Test Report: Docker_Linux_crio 21656

8fdbaae537091671bd14dcf95cc23073d72e85b2:2025-09-29:41680

Failed tests (6/332)

Order  Failed test  Duration (s)
37 TestAddons/parallel/Ingress 163.34
98 TestFunctional/parallel/ServiceCmdConnect 603.04
115 TestFunctional/parallel/ServiceCmd/DeployApp 600.66
153 TestFunctional/parallel/ServiceCmd/HTTPS 0.52
154 TestFunctional/parallel/ServiceCmd/Format 0.53
155 TestFunctional/parallel/ServiceCmd/URL 0.52
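
The three sub-second ServiceCmd failures (HTTPS, Format, URL) most likely cascade from the DeployApp timeout just above them, so the Ingress failure is the more useful one to reproduce first. A sketch of re-running only that test locally, assuming the make integration / TEST_ARGS workflow from minikube's contributor docs; the exact quoting needed to pass several -minikube-start-args values is an assumption to verify:

# Re-run only the failing Ingress test against the docker driver; add
# --container-runtime=crio to the start args to match this job.
env TEST_ARGS="-minikube-start-args=--driver=docker -test.run TestAddons/parallel/Ingress" make integration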
TestAddons/parallel/Ingress (163.34s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-721094 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-721094 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-721094 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [f651f3ab-7846-4142-8005-abb8c834ca8e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [f651f3ab-7846-4142-8005-abb8c834ca8e] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 16.003399706s
I0929 10:53:14.047538  132495 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-721094 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-721094 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m14.855915563s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
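The ssh failure above carries curl's own exit status: code 28 means the operation timed out, i.e. nothing behind the ingress answered on port 80 inside the node before curl gave up. A minimal manual re-check of the same request, with verbose output and an arbitrary 30s cap added, plus a quick look at the controller, might look like this (profile and host header taken from the log above):

# Same request the test issues at addons_test.go:264, with -v and --max-time added.
out/minikube-linux-amd64 -p addons-721094 ssh "curl -sv --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"
# Confirm the ingress-nginx controller is running and the test ingress exists.
kubectl --context addons-721094 -n ingress-nginx get pods -o wide
kubectl --context addons-721094 get ingress -n default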
addons_test.go:288: (dbg) Run:  kubectl --context addons-721094 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-721094 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-721094
helpers_test.go:243: (dbg) docker inspect addons-721094:

-- stdout --
	[
	    {
	        "Id": "e1529d311d8e3bb4637ad9b4afe78760b75cbd4f15d67fea0aa8521d45f84423",
	        "Created": "2025-09-29T10:50:11.887917976Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 134487,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T10:50:11.921044761Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/e1529d311d8e3bb4637ad9b4afe78760b75cbd4f15d67fea0aa8521d45f84423/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e1529d311d8e3bb4637ad9b4afe78760b75cbd4f15d67fea0aa8521d45f84423/hostname",
	        "HostsPath": "/var/lib/docker/containers/e1529d311d8e3bb4637ad9b4afe78760b75cbd4f15d67fea0aa8521d45f84423/hosts",
	        "LogPath": "/var/lib/docker/containers/e1529d311d8e3bb4637ad9b4afe78760b75cbd4f15d67fea0aa8521d45f84423/e1529d311d8e3bb4637ad9b4afe78760b75cbd4f15d67fea0aa8521d45f84423-json.log",
	        "Name": "/addons-721094",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-721094:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-721094",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e1529d311d8e3bb4637ad9b4afe78760b75cbd4f15d67fea0aa8521d45f84423",
	                "LowerDir": "/var/lib/docker/overlay2/a870079a6cdd9d3ecd2657868cbe7e32aafe5d7da3515e1ec6ba8dbc70263961-init/diff:/var/lib/docker/overlay2/6f46731317f9b9f8dbf1d4a7e01ff0254d8f3e30fed041625466f4497703adcb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a870079a6cdd9d3ecd2657868cbe7e32aafe5d7da3515e1ec6ba8dbc70263961/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a870079a6cdd9d3ecd2657868cbe7e32aafe5d7da3515e1ec6ba8dbc70263961/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a870079a6cdd9d3ecd2657868cbe7e32aafe5d7da3515e1ec6ba8dbc70263961/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-721094",
	                "Source": "/var/lib/docker/volumes/addons-721094/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-721094",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-721094",
	                "name.minikube.sigs.k8s.io": "addons-721094",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a4dce98f65ad604a20d67dc0873f7f13ecd9f4066605940e4c9603280f8d5fc1",
	            "SandboxKey": "/var/run/docker/netns/a4dce98f65ad",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32888"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32889"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32892"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32890"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32891"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-721094": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "96:b7:8f:29:82:b9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3806e86fbaf1355c765471a22579fa86e934da880b9a4862aa4a78e60c698c5c",
	                    "EndpointID": "f8dfe2a879b34e3b6bc76ec05e04d1f592408a6b4ec33e6b7e95ac912db29cca",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-721094",
	                        "e1529d311d8e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
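The inspect output shows each guest port published only on 127.0.0.1 behind an ephemeral host port (22 → 32888, 8443 → 32891, and so on), so all host-side access goes through these mappings. The harness resolves them with the same template query that appears later in this log; for example, the host port backing SSH can be read back with:

# Look up the host port mapped to the container's 22/tcp (mirrors the
# inspect call the test harness runs further down in this log).
docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-721094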
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-721094 -n addons-721094
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-721094 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-721094 logs -n 25: (1.219425143s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ --download-only -p binary-mirror-048358 --alsologtostderr --binary-mirror http://127.0.0.1:35065 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-048358 │ jenkins │ v1.37.0 │ 29 Sep 25 10:49 UTC │                     │
	│ delete  │ -p binary-mirror-048358                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-048358 │ jenkins │ v1.37.0 │ 29 Sep 25 10:49 UTC │ 29 Sep 25 10:49 UTC │
	│ addons  │ enable dashboard -p addons-721094                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-721094        │ jenkins │ v1.37.0 │ 29 Sep 25 10:49 UTC │                     │
	│ addons  │ disable dashboard -p addons-721094                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-721094        │ jenkins │ v1.37.0 │ 29 Sep 25 10:49 UTC │                     │
	│ start   │ -p addons-721094 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-721094        │ jenkins │ v1.37.0 │ 29 Sep 25 10:49 UTC │ 29 Sep 25 10:52 UTC │
	│ addons  │ addons-721094 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-721094        │ jenkins │ v1.37.0 │ 29 Sep 25 10:52 UTC │ 29 Sep 25 10:52 UTC │
	│ addons  │ addons-721094 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-721094        │ jenkins │ v1.37.0 │ 29 Sep 25 10:52 UTC │ 29 Sep 25 10:52 UTC │
	│ addons  │ enable headlamp -p addons-721094 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-721094        │ jenkins │ v1.37.0 │ 29 Sep 25 10:52 UTC │ 29 Sep 25 10:52 UTC │
	│ addons  │ addons-721094 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-721094        │ jenkins │ v1.37.0 │ 29 Sep 25 10:52 UTC │ 29 Sep 25 10:52 UTC │
	│ addons  │ addons-721094 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-721094        │ jenkins │ v1.37.0 │ 29 Sep 25 10:52 UTC │ 29 Sep 25 10:52 UTC │
	│ addons  │ addons-721094 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-721094        │ jenkins │ v1.37.0 │ 29 Sep 25 10:52 UTC │ 29 Sep 25 10:52 UTC │
	│ addons  │ addons-721094 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-721094        │ jenkins │ v1.37.0 │ 29 Sep 25 10:52 UTC │ 29 Sep 25 10:53 UTC │
	│ ip      │ addons-721094 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-721094        │ jenkins │ v1.37.0 │ 29 Sep 25 10:52 UTC │ 29 Sep 25 10:52 UTC │
	│ addons  │ addons-721094 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-721094        │ jenkins │ v1.37.0 │ 29 Sep 25 10:52 UTC │ 29 Sep 25 10:52 UTC │
	│ addons  │ addons-721094 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-721094        │ jenkins │ v1.37.0 │ 29 Sep 25 10:52 UTC │ 29 Sep 25 10:52 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-721094                                                                                                                                                                                                                                                                                                                                                                                           │ addons-721094        │ jenkins │ v1.37.0 │ 29 Sep 25 10:52 UTC │ 29 Sep 25 10:52 UTC │
	│ addons  │ addons-721094 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-721094        │ jenkins │ v1.37.0 │ 29 Sep 25 10:52 UTC │ 29 Sep 25 10:53 UTC │
	│ addons  │ addons-721094 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-721094        │ jenkins │ v1.37.0 │ 29 Sep 25 10:53 UTC │ 29 Sep 25 10:53 UTC │
	│ addons  │ addons-721094 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-721094        │ jenkins │ v1.37.0 │ 29 Sep 25 10:53 UTC │ 29 Sep 25 10:53 UTC │
	│ ssh     │ addons-721094 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-721094        │ jenkins │ v1.37.0 │ 29 Sep 25 10:53 UTC │                     │
	│ ssh     │ addons-721094 ssh cat /opt/local-path-provisioner/pvc-11c6a651-5517-404a-8a85-c61e4ebf2afe_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-721094        │ jenkins │ v1.37.0 │ 29 Sep 25 10:53 UTC │ 29 Sep 25 10:53 UTC │
	│ addons  │ addons-721094 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-721094        │ jenkins │ v1.37.0 │ 29 Sep 25 10:53 UTC │ 29 Sep 25 10:54 UTC │
	│ addons  │ addons-721094 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-721094        │ jenkins │ v1.37.0 │ 29 Sep 25 10:53 UTC │ 29 Sep 25 10:53 UTC │
	│ addons  │ addons-721094 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-721094        │ jenkins │ v1.37.0 │ 29 Sep 25 10:53 UTC │ 29 Sep 25 10:53 UTC │
	│ ip      │ addons-721094 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-721094        │ jenkins │ v1.37.0 │ 29 Sep 25 10:55 UTC │ 29 Sep 25 10:55 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 10:49:49
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 10:49:49.142968  133836 out.go:360] Setting OutFile to fd 1 ...
	I0929 10:49:49.143155  133836 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:49:49.143167  133836 out.go:374] Setting ErrFile to fd 2...
	I0929 10:49:49.143174  133836 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:49:49.143436  133836 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21656-128977/.minikube/bin
	I0929 10:49:49.144038  133836 out.go:368] Setting JSON to false
	I0929 10:49:49.144953  133836 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":1927,"bootTime":1759141062,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 10:49:49.145041  133836 start.go:140] virtualization: kvm guest
	I0929 10:49:49.146835  133836 out.go:179] * [addons-721094] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 10:49:49.148116  133836 notify.go:220] Checking for updates...
	I0929 10:49:49.148132  133836 out.go:179]   - MINIKUBE_LOCATION=21656
	I0929 10:49:49.149476  133836 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 10:49:49.150771  133836 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21656-128977/kubeconfig
	I0929 10:49:49.151925  133836 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21656-128977/.minikube
	I0929 10:49:49.153178  133836 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 10:49:49.154264  133836 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 10:49:49.155616  133836 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 10:49:49.179727  133836 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 10:49:49.179833  133836 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 10:49:49.240284  133836 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-29 10:49:49.229042031 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 10:49:49.240388  133836 docker.go:318] overlay module found
	I0929 10:49:49.242158  133836 out.go:179] * Using the docker driver based on user configuration
	I0929 10:49:49.243235  133836 start.go:304] selected driver: docker
	I0929 10:49:49.243247  133836 start.go:924] validating driver "docker" against <nil>
	I0929 10:49:49.243258  133836 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 10:49:49.243737  133836 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 10:49:49.301083  133836 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-29 10:49:49.290622792 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 10:49:49.301274  133836 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0929 10:49:49.301478  133836 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 10:49:49.302973  133836 out.go:179] * Using Docker driver with root privileges
	I0929 10:49:49.305302  133836 cni.go:84] Creating CNI manager for ""
	I0929 10:49:49.305356  133836 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 10:49:49.305365  133836 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0929 10:49:49.305435  133836 start.go:348] cluster config:
	{Name:addons-721094 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-721094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: Network
Plugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}

	I0929 10:49:49.306729  133836 out.go:179] * Starting "addons-721094" primary control-plane node in "addons-721094" cluster
	I0929 10:49:49.307767  133836 cache.go:123] Beginning downloading kic base image for docker with crio
	I0929 10:49:49.308880  133836 out.go:179] * Pulling base image v0.0.48 ...
	I0929 10:49:49.309888  133836 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 10:49:49.309916  133836 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 10:49:49.309924  133836 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21656-128977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0929 10:49:49.309932  133836 cache.go:58] Caching tarball of preloaded images
	I0929 10:49:49.310013  133836 preload.go:172] Found /home/jenkins/minikube-integration/21656-128977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0929 10:49:49.310024  133836 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0929 10:49:49.310288  133836 profile.go:143] Saving config to /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/addons-721094/config.json ...
	I0929 10:49:49.310310  133836 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/addons-721094/config.json: {Name:mk99ca576de1b939a130d3cd07d12791b4db588e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:49:49.326281  133836 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I0929 10:49:49.326399  133836 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory
	I0929 10:49:49.326420  133836 image.go:68] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory, skipping pull
	I0929 10:49:49.326429  133836 image.go:137] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in cache, skipping pull
	I0929 10:49:49.326442  133836 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 as a tarball
	I0929 10:49:49.326453  133836 cache.go:165] Loading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 from local cache
	I0929 10:50:01.921059  133836 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 from cached tarball
	I0929 10:50:01.921112  133836 cache.go:232] Successfully downloaded all kic artifacts
	I0929 10:50:01.921155  133836 start.go:360] acquireMachinesLock for addons-721094: {Name:mka618b20f58fb2801d716574321dbdd1a56e709 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 10:50:01.921279  133836 start.go:364] duration metric: took 96.941µs to acquireMachinesLock for "addons-721094"
	I0929 10:50:01.921311  133836 start.go:93] Provisioning new machine with config: &{Name:addons-721094 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-721094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: S
ocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0929 10:50:01.921394  133836 start.go:125] createHost starting for "" (driver="docker")
	I0929 10:50:01.923521  133836 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0929 10:50:01.923774  133836 start.go:159] libmachine.API.Create for "addons-721094" (driver="docker")
	I0929 10:50:01.923817  133836 client.go:168] LocalClient.Create starting
	I0929 10:50:01.923959  133836 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21656-128977/.minikube/certs/ca.pem
	I0929 10:50:02.306838  133836 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21656-128977/.minikube/certs/cert.pem
	I0929 10:50:02.501220  133836 cli_runner.go:164] Run: docker network inspect addons-721094 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0929 10:50:02.518789  133836 cli_runner.go:211] docker network inspect addons-721094 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0929 10:50:02.518880  133836 network_create.go:284] running [docker network inspect addons-721094] to gather additional debugging logs...
	I0929 10:50:02.518907  133836 cli_runner.go:164] Run: docker network inspect addons-721094
	W0929 10:50:02.535764  133836 cli_runner.go:211] docker network inspect addons-721094 returned with exit code 1
	I0929 10:50:02.535800  133836 network_create.go:287] error running [docker network inspect addons-721094]: docker network inspect addons-721094: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-721094 not found
	I0929 10:50:02.535819  133836 network_create.go:289] output of [docker network inspect addons-721094]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-721094 not found
	
	** /stderr **
	I0929 10:50:02.535987  133836 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 10:50:02.553702  133836 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002283740}
	I0929 10:50:02.553748  133836 network_create.go:124] attempt to create docker network addons-721094 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0929 10:50:02.553792  133836 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-721094 addons-721094
	I0929 10:50:02.609654  133836 network_create.go:108] docker network addons-721094 192.168.49.0/24 created
	I0929 10:50:02.609701  133836 kic.go:121] calculated static IP "192.168.49.2" for the "addons-721094" container
	I0929 10:50:02.609769  133836 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0929 10:50:02.626848  133836 cli_runner.go:164] Run: docker volume create addons-721094 --label name.minikube.sigs.k8s.io=addons-721094 --label created_by.minikube.sigs.k8s.io=true
	I0929 10:50:02.646497  133836 oci.go:103] Successfully created a docker volume addons-721094
	I0929 10:50:02.646584  133836 cli_runner.go:164] Run: docker run --rm --name addons-721094-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-721094 --entrypoint /usr/bin/test -v addons-721094:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0929 10:50:07.532198  133836 cli_runner.go:217] Completed: docker run --rm --name addons-721094-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-721094 --entrypoint /usr/bin/test -v addons-721094:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib: (4.885569612s)
	I0929 10:50:07.532233  133836 oci.go:107] Successfully prepared a docker volume addons-721094
	I0929 10:50:07.532254  133836 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 10:50:07.532279  133836 kic.go:194] Starting extracting preloaded images to volume ...
	I0929 10:50:07.532336  133836 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21656-128977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-721094:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0929 10:50:11.820585  133836 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21656-128977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-721094:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.288201263s)
	I0929 10:50:11.820623  133836 kic.go:203] duration metric: took 4.288343063s to extract preloaded images to volume ...
	W0929 10:50:11.820716  133836 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0929 10:50:11.820746  133836 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0929 10:50:11.820782  133836 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0929 10:50:11.871512  133836 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-721094 --name addons-721094 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-721094 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-721094 --network addons-721094 --ip 192.168.49.2 --volume addons-721094:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0929 10:50:12.139637  133836 cli_runner.go:164] Run: docker container inspect addons-721094 --format={{.State.Running}}
	I0929 10:50:12.158002  133836 cli_runner.go:164] Run: docker container inspect addons-721094 --format={{.State.Status}}
	I0929 10:50:12.177382  133836 cli_runner.go:164] Run: docker exec addons-721094 stat /var/lib/dpkg/alternatives/iptables
	I0929 10:50:12.225070  133836 oci.go:144] the created container "addons-721094" has a running status.
	I0929 10:50:12.225105  133836 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21656-128977/.minikube/machines/addons-721094/id_rsa...
	I0929 10:50:12.673045  133836 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21656-128977/.minikube/machines/addons-721094/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0929 10:50:12.698022  133836 cli_runner.go:164] Run: docker container inspect addons-721094 --format={{.State.Status}}
	I0929 10:50:12.716177  133836 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0929 10:50:12.716205  133836 kic_runner.go:114] Args: [docker exec --privileged addons-721094 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0929 10:50:12.758754  133836 cli_runner.go:164] Run: docker container inspect addons-721094 --format={{.State.Status}}
	I0929 10:50:12.777549  133836 machine.go:93] provisionDockerMachine start ...
	I0929 10:50:12.777654  133836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-721094
	I0929 10:50:12.795174  133836 main.go:141] libmachine: Using SSH client type: native
	I0929 10:50:12.795428  133836 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I0929 10:50:12.795444  133836 main.go:141] libmachine: About to run SSH command:
	hostname
	I0929 10:50:12.931832  133836 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-721094
	
	I0929 10:50:12.931865  133836 ubuntu.go:182] provisioning hostname "addons-721094"
	I0929 10:50:12.931931  133836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-721094
	I0929 10:50:12.948472  133836 main.go:141] libmachine: Using SSH client type: native
	I0929 10:50:12.948711  133836 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I0929 10:50:12.948729  133836 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-721094 && echo "addons-721094" | sudo tee /etc/hostname
	I0929 10:50:13.096146  133836 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-721094
	
	I0929 10:50:13.096218  133836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-721094
	I0929 10:50:13.114851  133836 main.go:141] libmachine: Using SSH client type: native
	I0929 10:50:13.115061  133836 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I0929 10:50:13.115079  133836 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-721094' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-721094/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-721094' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 10:50:13.250308  133836 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 10:50:13.250340  133836 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21656-128977/.minikube CaCertPath:/home/jenkins/minikube-integration/21656-128977/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21656-128977/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21656-128977/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21656-128977/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21656-128977/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21656-128977/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21656-128977/.minikube}
	I0929 10:50:13.250363  133836 ubuntu.go:190] setting up certificates
	I0929 10:50:13.250374  133836 provision.go:84] configureAuth start
	I0929 10:50:13.250425  133836 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-721094
	I0929 10:50:13.267303  133836 provision.go:143] copyHostCerts
	I0929 10:50:13.267374  133836 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21656-128977/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21656-128977/.minikube/ca.pem (1078 bytes)
	I0929 10:50:13.267488  133836 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21656-128977/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21656-128977/.minikube/cert.pem (1123 bytes)
	I0929 10:50:13.267584  133836 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21656-128977/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21656-128977/.minikube/key.pem (1679 bytes)
	I0929 10:50:13.267646  133836 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21656-128977/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21656-128977/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21656-128977/.minikube/certs/ca-key.pem org=jenkins.addons-721094 san=[127.0.0.1 192.168.49.2 addons-721094 localhost minikube]
	I0929 10:50:13.376694  133836 provision.go:177] copyRemoteCerts
	I0929 10:50:13.376756  133836 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 10:50:13.376790  133836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-721094
	I0929 10:50:13.394993  133836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21656-128977/.minikube/machines/addons-721094/id_rsa Username:docker}
	I0929 10:50:13.491700  133836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-128977/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0929 10:50:13.518620  133836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-128977/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0929 10:50:13.542660  133836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-128977/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0929 10:50:13.566308  133836 provision.go:87] duration metric: took 315.919521ms to configureAuth
	I0929 10:50:13.566336  133836 ubuntu.go:206] setting minikube options for container-runtime
	I0929 10:50:13.566522  133836 config.go:182] Loaded profile config "addons-721094": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 10:50:13.566637  133836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-721094
	I0929 10:50:13.585630  133836 main.go:141] libmachine: Using SSH client type: native
	I0929 10:50:13.585863  133836 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I0929 10:50:13.585882  133836 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0929 10:50:13.820779  133836 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
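	For reference, the SSH step above can be reproduced by hand. A minimal sketch (not part of the captured log; it assumes the same 10.96.0.0/12 service CIDR as this cluster) of the drop-in environment file CRI-O reads on the node:

	    sudo mkdir -p /etc/sysconfig
	    printf '%s\n' "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" | sudo tee /etc/sysconfig/crio.minikube
	    sudo systemctl restart crio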
	I0929 10:50:13.820808  133836 machine.go:96] duration metric: took 1.04323606s to provisionDockerMachine
	I0929 10:50:13.820818  133836 client.go:171] duration metric: took 11.896979442s to LocalClient.Create
	I0929 10:50:13.820853  133836 start.go:167] duration metric: took 11.897078778s to libmachine.API.Create "addons-721094"
	I0929 10:50:13.820863  133836 start.go:293] postStartSetup for "addons-721094" (driver="docker")
	I0929 10:50:13.820876  133836 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 10:50:13.820942  133836 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 10:50:13.820974  133836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-721094
	I0929 10:50:13.839184  133836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21656-128977/.minikube/machines/addons-721094/id_rsa Username:docker}
	I0929 10:50:13.936875  133836 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 10:50:13.940279  133836 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0929 10:50:13.940314  133836 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0929 10:50:13.940329  133836 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0929 10:50:13.940337  133836 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0929 10:50:13.940348  133836 filesync.go:126] Scanning /home/jenkins/minikube-integration/21656-128977/.minikube/addons for local assets ...
	I0929 10:50:13.940409  133836 filesync.go:126] Scanning /home/jenkins/minikube-integration/21656-128977/.minikube/files for local assets ...
	I0929 10:50:13.940432  133836 start.go:296] duration metric: took 119.559819ms for postStartSetup
	I0929 10:50:13.940720  133836 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-721094
	I0929 10:50:13.958336  133836 profile.go:143] Saving config to /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/addons-721094/config.json ...
	I0929 10:50:13.958643  133836 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 10:50:13.958694  133836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-721094
	I0929 10:50:13.975956  133836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21656-128977/.minikube/machines/addons-721094/id_rsa Username:docker}
	I0929 10:50:14.067811  133836 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0929 10:50:14.072012  133836 start.go:128] duration metric: took 12.15060206s to createHost
	I0929 10:50:14.072038  133836 start.go:83] releasing machines lock for "addons-721094", held for 12.150744694s
	I0929 10:50:14.072102  133836 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-721094
	I0929 10:50:14.089958  133836 ssh_runner.go:195] Run: cat /version.json
	I0929 10:50:14.090009  133836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-721094
	I0929 10:50:14.090049  133836 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 10:50:14.090103  133836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-721094
	I0929 10:50:14.107746  133836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21656-128977/.minikube/machines/addons-721094/id_rsa Username:docker}
	I0929 10:50:14.108434  133836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21656-128977/.minikube/machines/addons-721094/id_rsa Username:docker}
	I0929 10:50:14.275898  133836 ssh_runner.go:195] Run: systemctl --version
	I0929 10:50:14.280414  133836 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0929 10:50:14.420060  133836 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0929 10:50:14.424603  133836 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 10:50:14.445869  133836 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0929 10:50:14.445950  133836 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 10:50:14.475994  133836 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
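	The two find commands above rename any pre-existing loopback and bridge/podman CNI configs so that only the CNI minikube installs later (kindnet, per the lines below) programs pod networking. A quoted, standalone sketch of the same rename, shown here for readability only:

	    sudo find /etc/cni/net.d -maxdepth 1 -type f \
	      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;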
	I0929 10:50:14.476018  133836 start.go:495] detecting cgroup driver to use...
	I0929 10:50:14.476053  133836 detect.go:190] detected "systemd" cgroup driver on host os
	I0929 10:50:14.476095  133836 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 10:50:14.490837  133836 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 10:50:14.502033  133836 docker.go:218] disabling cri-docker service (if available) ...
	I0929 10:50:14.502093  133836 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0929 10:50:14.515331  133836 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0929 10:50:14.529309  133836 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0929 10:50:14.594930  133836 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0929 10:50:14.667239  133836 docker.go:234] disabling docker service ...
	I0929 10:50:14.667298  133836 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0929 10:50:14.684463  133836 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0929 10:50:14.695791  133836 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0929 10:50:14.761720  133836 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0929 10:50:14.877811  133836 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 10:50:14.889406  133836 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 10:50:14.905397  133836 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0929 10:50:14.905454  133836 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 10:50:14.918980  133836 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0929 10:50:14.919043  133836 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 10:50:14.929347  133836 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 10:50:14.939798  133836 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 10:50:14.949975  133836 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 10:50:14.959510  133836 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 10:50:14.969720  133836 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 10:50:14.986517  133836 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 10:50:14.997058  133836 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 10:50:15.005541  133836 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 10:50:15.014094  133836 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 10:50:15.113933  133836 ssh_runner.go:195] Run: sudo systemctl restart crio
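	Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf in roughly the following shape before crio is restarted (approximate sketch; other keys shipped in the kicbase image are omitted):

	    [crio.runtime]
	    cgroup_manager = "systemd"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]

	    [crio.image]
	    pause_image = "registry.k8s.io/pause:3.10.1"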
	I0929 10:50:15.214335  133836 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0929 10:50:15.214422  133836 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0929 10:50:15.218594  133836 start.go:563] Will wait 60s for crictl version
	I0929 10:50:15.218649  133836 ssh_runner.go:195] Run: which crictl
	I0929 10:50:15.222551  133836 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 10:50:15.255978  133836 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0929 10:50:15.256108  133836 ssh_runner.go:195] Run: crio --version
	I0929 10:50:15.290096  133836 ssh_runner.go:195] Run: crio --version
	I0929 10:50:15.325463  133836 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0929 10:50:15.326667  133836 cli_runner.go:164] Run: docker network inspect addons-721094 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 10:50:15.343233  133836 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0929 10:50:15.347036  133836 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 10:50:15.358424  133836 kubeadm.go:875] updating cluster {Name:addons-721094 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-721094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] D
NSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVM
netPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 10:50:15.358521  133836 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 10:50:15.358561  133836 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 10:50:15.422802  133836 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 10:50:15.422839  133836 crio.go:433] Images already preloaded, skipping extraction
	I0929 10:50:15.422893  133836 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 10:50:15.455399  133836 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 10:50:15.455426  133836 cache_images.go:85] Images are preloaded, skipping loading
	I0929 10:50:15.455434  133836 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 crio true true} ...
	I0929 10:50:15.455531  133836 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-721094 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:addons-721094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0929 10:50:15.455659  133836 ssh_runner.go:195] Run: crio config
	I0929 10:50:15.496791  133836 cni.go:84] Creating CNI manager for ""
	I0929 10:50:15.496815  133836 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 10:50:15.496841  133836 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 10:50:15.496871  133836 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-721094 NodeName:addons-721094 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 10:50:15.497024  133836 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-721094"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0929 10:50:15.497109  133836 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0929 10:50:15.506136  133836 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 10:50:15.506196  133836 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 10:50:15.514790  133836 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0929 10:50:15.532571  133836 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 10:50:15.552565  133836 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I0929 10:50:15.570340  133836 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0929 10:50:15.573800  133836 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 10:50:15.584577  133836 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 10:50:15.650278  133836 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 10:50:15.674382  133836 certs.go:68] Setting up /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/addons-721094 for IP: 192.168.49.2
	I0929 10:50:15.674402  133836 certs.go:194] generating shared ca certs ...
	I0929 10:50:15.674419  133836 certs.go:226] acquiring lock for ca certs: {Name:mkc764614b47e03e9b95168b9aa46e116705eeb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:50:15.674545  133836 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21656-128977/.minikube/ca.key
	I0929 10:50:16.078801  133836 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21656-128977/.minikube/ca.crt ...
	I0929 10:50:16.078846  133836 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-128977/.minikube/ca.crt: {Name:mk96f01bbed219946ee6b52a5dd7ed3158bf9bfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:50:16.079088  133836 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21656-128977/.minikube/ca.key ...
	I0929 10:50:16.079110  133836 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-128977/.minikube/ca.key: {Name:mk42679a2e9248dedbd3626adc3865190a688514 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:50:16.079250  133836 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21656-128977/.minikube/proxy-client-ca.key
	I0929 10:50:16.240419  133836 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21656-128977/.minikube/proxy-client-ca.crt ...
	I0929 10:50:16.240453  133836 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-128977/.minikube/proxy-client-ca.crt: {Name:mk06a1368cef7d4415cf4c8b7378811328dc2a5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:50:16.240680  133836 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21656-128977/.minikube/proxy-client-ca.key ...
	I0929 10:50:16.240701  133836 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-128977/.minikube/proxy-client-ca.key: {Name:mk586bf7413a8bca52dee697e9be30315009b0ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:50:16.240818  133836 certs.go:256] generating profile certs ...
	I0929 10:50:16.240916  133836 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/addons-721094/client.key
	I0929 10:50:16.240937  133836 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/addons-721094/client.crt with IP's: []
	I0929 10:50:16.408880  133836 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/addons-721094/client.crt ...
	I0929 10:50:16.408915  133836 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/addons-721094/client.crt: {Name:mk5d5369f53df4b550e0700870f0428028b2af4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:50:16.409132  133836 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/addons-721094/client.key ...
	I0929 10:50:16.409154  133836 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/addons-721094/client.key: {Name:mkbeaa158472921cb87c9306fbe5effbc575860b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:50:16.409276  133836 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/addons-721094/apiserver.key.645edb7e
	I0929 10:50:16.409309  133836 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/addons-721094/apiserver.crt.645edb7e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0929 10:50:16.608808  133836 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/addons-721094/apiserver.crt.645edb7e ...
	I0929 10:50:16.608853  133836 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/addons-721094/apiserver.crt.645edb7e: {Name:mk1b57e4d09a4252fb9ae5077da4349b956c78ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:50:16.609041  133836 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/addons-721094/apiserver.key.645edb7e ...
	I0929 10:50:16.609056  133836 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/addons-721094/apiserver.key.645edb7e: {Name:mk6bb07c1d3a3f133ad89cd2c1781399f93ec187 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:50:16.609131  133836 certs.go:381] copying /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/addons-721094/apiserver.crt.645edb7e -> /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/addons-721094/apiserver.crt
	I0929 10:50:16.609205  133836 certs.go:385] copying /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/addons-721094/apiserver.key.645edb7e -> /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/addons-721094/apiserver.key
	I0929 10:50:16.609252  133836 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/addons-721094/proxy-client.key
	I0929 10:50:16.609268  133836 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/addons-721094/proxy-client.crt with IP's: []
	I0929 10:50:16.722074  133836 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/addons-721094/proxy-client.crt ...
	I0929 10:50:16.722104  133836 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/addons-721094/proxy-client.crt: {Name:mk8e38416d4abe1b7aebd21bf3d6a49e2b6d8b87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:50:16.722263  133836 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/addons-721094/proxy-client.key ...
	I0929 10:50:16.722277  133836 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/addons-721094/proxy-client.key: {Name:mkf21af4f597d234a14187284669d63c0367f1bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:50:16.722435  133836 certs.go:484] found cert: /home/jenkins/minikube-integration/21656-128977/.minikube/certs/ca-key.pem (1675 bytes)
	I0929 10:50:16.722470  133836 certs.go:484] found cert: /home/jenkins/minikube-integration/21656-128977/.minikube/certs/ca.pem (1078 bytes)
	I0929 10:50:16.722495  133836 certs.go:484] found cert: /home/jenkins/minikube-integration/21656-128977/.minikube/certs/cert.pem (1123 bytes)
	I0929 10:50:16.722517  133836 certs.go:484] found cert: /home/jenkins/minikube-integration/21656-128977/.minikube/certs/key.pem (1679 bytes)
	I0929 10:50:16.723147  133836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-128977/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 10:50:16.748398  133836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-128977/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0929 10:50:16.772380  133836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-128977/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 10:50:16.796962  133836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-128977/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0929 10:50:16.820874  133836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/addons-721094/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0929 10:50:16.844107  133836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/addons-721094/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0929 10:50:16.868126  133836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/addons-721094/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 10:50:16.891970  133836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/addons-721094/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0929 10:50:16.915454  133836 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21656-128977/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 10:50:16.941585  133836 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 10:50:16.959502  133836 ssh_runner.go:195] Run: openssl version
	I0929 10:50:16.964922  133836 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 10:50:16.977032  133836 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 10:50:16.980492  133836 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 10:50 /usr/share/ca-certificates/minikubeCA.pem
	I0929 10:50:16.980540  133836 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 10:50:16.987133  133836 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
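	The openssl and ln steps above install the minikube CA into the node's system trust store under its OpenSSL subject-hash name. A rough manual equivalent (illustrative sketch; the b5213941 hash seen above is specific to this CA):

	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"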
	I0929 10:50:16.996411  133836 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 10:50:16.999705  133836 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0929 10:50:16.999754  133836 kubeadm.go:392] StartCluster: {Name:addons-721094 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-721094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSD
omain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnet
Path: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 10:50:16.999866  133836 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0929 10:50:16.999917  133836 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0929 10:50:17.033763  133836 cri.go:89] found id: ""
	I0929 10:50:17.033853  133836 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 10:50:17.043376  133836 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0929 10:50:17.052790  133836 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0929 10:50:17.052866  133836 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0929 10:50:17.061710  133836 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0929 10:50:17.061727  133836 kubeadm.go:157] found existing configuration files:
	
	I0929 10:50:17.061767  133836 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0929 10:50:17.070362  133836 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0929 10:50:17.070411  133836 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0929 10:50:17.078792  133836 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0929 10:50:17.088310  133836 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0929 10:50:17.088362  133836 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0929 10:50:17.096495  133836 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0929 10:50:17.105059  133836 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0929 10:50:17.105125  133836 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0929 10:50:17.113437  133836 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0929 10:50:17.121808  133836 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0929 10:50:17.121876  133836 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0929 10:50:17.130264  133836 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0929 10:50:17.166669  133836 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0929 10:50:17.166768  133836 kubeadm.go:310] [preflight] Running pre-flight checks
	I0929 10:50:17.190092  133836 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0929 10:50:17.190181  133836 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1040-gcp
	I0929 10:50:17.190226  133836 kubeadm.go:310] OS: Linux
	I0929 10:50:17.190281  133836 kubeadm.go:310] CGROUPS_CPU: enabled
	I0929 10:50:17.190358  133836 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0929 10:50:17.190492  133836 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0929 10:50:17.190590  133836 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0929 10:50:17.190659  133836 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0929 10:50:17.190707  133836 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0929 10:50:17.190770  133836 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0929 10:50:17.190954  133836 kubeadm.go:310] CGROUPS_IO: enabled
	I0929 10:50:17.243531  133836 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0929 10:50:17.243669  133836 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0929 10:50:17.243793  133836 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0929 10:50:17.251268  133836 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0929 10:50:17.253133  133836 out.go:252]   - Generating certificates and keys ...
	I0929 10:50:17.253253  133836 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0929 10:50:17.253335  133836 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0929 10:50:17.761086  133836 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0929 10:50:17.854958  133836 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0929 10:50:18.071436  133836 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0929 10:50:18.221600  133836 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0929 10:50:18.447747  133836 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0929 10:50:18.447883  133836 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-721094 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0929 10:50:18.550258  133836 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0929 10:50:18.550409  133836 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-721094 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0929 10:50:18.707988  133836 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0929 10:50:18.821423  133836 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0929 10:50:19.148416  133836 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0929 10:50:19.148539  133836 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0929 10:50:19.398680  133836 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0929 10:50:19.530649  133836 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0929 10:50:19.838880  133836 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0929 10:50:19.912734  133836 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0929 10:50:20.070574  133836 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0929 10:50:20.071116  133836 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0929 10:50:20.074539  133836 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0929 10:50:20.076088  133836 out.go:252]   - Booting up control plane ...
	I0929 10:50:20.076183  133836 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0929 10:50:20.076281  133836 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0929 10:50:20.077582  133836 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0929 10:50:20.086701  133836 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0929 10:50:20.086814  133836 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0929 10:50:20.092776  133836 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0929 10:50:20.093079  133836 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0929 10:50:20.093122  133836 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0929 10:50:20.166797  133836 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0929 10:50:20.166941  133836 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0929 10:50:21.168061  133836 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001481193s
	I0929 10:50:21.170894  133836 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0929 10:50:21.171028  133836 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0929 10:50:21.171158  133836 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0929 10:50:21.171306  133836 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0929 10:50:22.677597  133836 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 1.506658144s
	I0929 10:50:22.739587  133836 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 1.568731565s
	I0929 10:50:24.672917  133836 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 3.501932112s
	I0929 10:50:24.684151  133836 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0929 10:50:24.694111  133836 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0929 10:50:24.702051  133836 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0929 10:50:24.702306  133836 kubeadm.go:310] [mark-control-plane] Marking the node addons-721094 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0929 10:50:24.710556  133836 kubeadm.go:310] [bootstrap-token] Using token: q8oq56.yqbbdc1zl9kmkrdg
	I0929 10:50:24.711856  133836 out.go:252]   - Configuring RBAC rules ...
	I0929 10:50:24.712001  133836 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0929 10:50:24.714550  133836 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0929 10:50:24.719257  133836 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0929 10:50:24.721538  133836 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0929 10:50:24.724716  133836 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0929 10:50:24.726867  133836 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0929 10:50:25.079474  133836 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0929 10:50:25.495622  133836 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0929 10:50:26.079088  133836 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0929 10:50:26.079972  133836 kubeadm.go:310] 
	I0929 10:50:26.080054  133836 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0929 10:50:26.080083  133836 kubeadm.go:310] 
	I0929 10:50:26.080228  133836 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0929 10:50:26.080250  133836 kubeadm.go:310] 
	I0929 10:50:26.080303  133836 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0929 10:50:26.080375  133836 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0929 10:50:26.080427  133836 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0929 10:50:26.080433  133836 kubeadm.go:310] 
	I0929 10:50:26.080479  133836 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0929 10:50:26.080485  133836 kubeadm.go:310] 
	I0929 10:50:26.080540  133836 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0929 10:50:26.080549  133836 kubeadm.go:310] 
	I0929 10:50:26.080624  133836 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0929 10:50:26.080699  133836 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0929 10:50:26.080757  133836 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0929 10:50:26.080763  133836 kubeadm.go:310] 
	I0929 10:50:26.080865  133836 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0929 10:50:26.080974  133836 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0929 10:50:26.080991  133836 kubeadm.go:310] 
	I0929 10:50:26.081091  133836 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token q8oq56.yqbbdc1zl9kmkrdg \
	I0929 10:50:26.081182  133836 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:65decd7bdfe23eaa2ab17e3ecf46b313b6eec3cf1a0b4c783fb6c441d3f99e10 \
	I0929 10:50:26.081202  133836 kubeadm.go:310] 	--control-plane 
	I0929 10:50:26.081207  133836 kubeadm.go:310] 
	I0929 10:50:26.081273  133836 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0929 10:50:26.081279  133836 kubeadm.go:310] 
	I0929 10:50:26.081370  133836 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token q8oq56.yqbbdc1zl9kmkrdg \
	I0929 10:50:26.081513  133836 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:65decd7bdfe23eaa2ab17e3ecf46b313b6eec3cf1a0b4c783fb6c441d3f99e10 
	I0929 10:50:26.083712  133836 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1040-gcp\n", err: exit status 1
	I0929 10:50:26.083849  133836 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0929 10:50:26.083902  133836 cni.go:84] Creating CNI manager for ""
	I0929 10:50:26.083916  133836 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 10:50:26.085605  133836 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0929 10:50:26.086712  133836 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0929 10:50:26.090633  133836 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0929 10:50:26.090649  133836 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0929 10:50:26.109760  133836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0929 10:50:26.311007  133836 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0929 10:50:26.311095  133836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:50:26.311109  133836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-721094 minikube.k8s.io/updated_at=2025_09_29T10_50_26_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c1f958e1d15faaa2b94ae7399d1155627e45fcf8 minikube.k8s.io/name=addons-721094 minikube.k8s.io/primary=true
	I0929 10:50:26.318865  133836 ops.go:34] apiserver oom_adj: -16
	I0929 10:50:26.384606  133836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:50:26.885563  133836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:50:27.384691  133836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:50:27.885563  133836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:50:28.385448  133836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:50:28.884671  133836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:50:29.385026  133836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:50:29.885470  133836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:50:30.385262  133836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:50:30.884980  133836 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:50:30.968942  133836 kubeadm.go:1105] duration metric: took 4.65791064s to wait for elevateKubeSystemPrivileges
	I0929 10:50:30.969002  133836 kubeadm.go:394] duration metric: took 13.969252531s to StartCluster
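
	[annotation] The repeated "kubectl get sa default" runs between 10:50:26 and 10:50:30 above are a poll loop: minikube checks roughly every 500ms for the default service account in kube-system and records the elapsed time once it appears (the "elevateKubeSystemPrivileges" duration metric). A minimal sketch of that pattern, using only the Go standard library and a stubbed defaultServiceAccountExists helper (the real check shells out to kubectl; this is not minikube's actual code):

	    package main

	    import (
	    	"context"
	    	"fmt"
	    	"time"
	    )

	    // defaultServiceAccountExists stands in for the real check, which runs
	    // `kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig` and
	    // treats exit status 0 as success; stubbed here so the sketch terminates.
	    func defaultServiceAccountExists(ctx context.Context) bool { return true }

	    // waitForDefaultServiceAccount polls on the ~500ms cadence visible in the
	    // log until the service account exists or the context expires, then
	    // reports how long the wait took (the "duration metric" line).
	    func waitForDefaultServiceAccount(ctx context.Context) (time.Duration, error) {
	    	start := time.Now()
	    	ticker := time.NewTicker(500 * time.Millisecond)
	    	defer ticker.Stop()
	    	for {
	    		if defaultServiceAccountExists(ctx) {
	    			return time.Since(start), nil
	    		}
	    		select {
	    		case <-ctx.Done():
	    			return time.Since(start), ctx.Err()
	    		case <-ticker.C:
	    		}
	    	}
	    }

	    func main() {
	    	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
	    	defer cancel()
	    	elapsed, err := waitForDefaultServiceAccount(ctx)
	    	fmt.Println("waited", elapsed, "err:", err)
	    }
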
	I0929 10:50:30.969034  133836 settings.go:142] acquiring lock: {Name:mk39f5124378722608e243ace207ba4137d3ae24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:50:30.969185  133836 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21656-128977/kubeconfig
	I0929 10:50:30.969535  133836 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21656-128977/kubeconfig: {Name:mkd98746ee20caeb113b9e306cb5a01dc05364db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:50:30.970568  133836 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0929 10:50:30.970597  133836 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0929 10:50:30.970691  133836 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0929 10:50:30.970833  133836 config.go:182] Loaded profile config "addons-721094": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 10:50:30.970844  133836 addons.go:69] Setting yakd=true in profile "addons-721094"
	I0929 10:50:30.970858  133836 addons.go:69] Setting inspektor-gadget=true in profile "addons-721094"
	I0929 10:50:30.970874  133836 addons.go:238] Setting addon yakd=true in "addons-721094"
	I0929 10:50:30.970881  133836 addons.go:69] Setting cloud-spanner=true in profile "addons-721094"
	I0929 10:50:30.970885  133836 addons.go:238] Setting addon inspektor-gadget=true in "addons-721094"
	I0929 10:50:30.970894  133836 addons.go:238] Setting addon cloud-spanner=true in "addons-721094"
	I0929 10:50:30.970889  133836 addons.go:69] Setting default-storageclass=true in profile "addons-721094"
	I0929 10:50:30.970912  133836 host.go:66] Checking if "addons-721094" exists ...
	I0929 10:50:30.970916  133836 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-721094"
	I0929 10:50:30.970914  133836 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-721094"
	I0929 10:50:30.970928  133836 addons.go:69] Setting ingress-dns=true in profile "addons-721094"
	I0929 10:50:30.970931  133836 addons.go:69] Setting ingress=true in profile "addons-721094"
	I0929 10:50:30.970943  133836 addons.go:238] Setting addon ingress-dns=true in "addons-721094"
	I0929 10:50:30.970946  133836 addons.go:69] Setting metrics-server=true in profile "addons-721094"
	I0929 10:50:30.970949  133836 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-721094"
	I0929 10:50:30.970953  133836 addons.go:238] Setting addon ingress=true in "addons-721094"
	I0929 10:50:30.970958  133836 addons.go:238] Setting addon metrics-server=true in "addons-721094"
	I0929 10:50:30.970967  133836 host.go:66] Checking if "addons-721094" exists ...
	I0929 10:50:30.970976  133836 host.go:66] Checking if "addons-721094" exists ...
	I0929 10:50:30.970980  133836 host.go:66] Checking if "addons-721094" exists ...
	I0929 10:50:30.970981  133836 host.go:66] Checking if "addons-721094" exists ...
	I0929 10:50:30.971248  133836 addons.go:69] Setting gcp-auth=true in profile "addons-721094"
	I0929 10:50:30.971270  133836 mustload.go:65] Loading cluster: addons-721094
	I0929 10:50:30.971340  133836 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-721094"
	I0929 10:50:30.971355  133836 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-721094"
	I0929 10:50:30.971459  133836 config.go:182] Loaded profile config "addons-721094": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 10:50:30.971481  133836 cli_runner.go:164] Run: docker container inspect addons-721094 --format={{.State.Status}}
	I0929 10:50:30.971503  133836 cli_runner.go:164] Run: docker container inspect addons-721094 --format={{.State.Status}}
	I0929 10:50:30.971553  133836 cli_runner.go:164] Run: docker container inspect addons-721094 --format={{.State.Status}}
	I0929 10:50:30.971609  133836 cli_runner.go:164] Run: docker container inspect addons-721094 --format={{.State.Status}}
	I0929 10:50:30.971692  133836 cli_runner.go:164] Run: docker container inspect addons-721094 --format={{.State.Status}}
	I0929 10:50:30.972005  133836 cli_runner.go:164] Run: docker container inspect addons-721094 --format={{.State.Status}}
	I0929 10:50:30.972301  133836 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-721094"
	I0929 10:50:30.972395  133836 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-721094"
	I0929 10:50:30.972439  133836 host.go:66] Checking if "addons-721094" exists ...
	I0929 10:50:30.972505  133836 cli_runner.go:164] Run: docker container inspect addons-721094 --format={{.State.Status}}
	I0929 10:50:30.972492  133836 addons.go:69] Setting storage-provisioner=true in profile "addons-721094"
	I0929 10:50:30.972535  133836 addons.go:238] Setting addon storage-provisioner=true in "addons-721094"
	I0929 10:50:30.972568  133836 host.go:66] Checking if "addons-721094" exists ...
	I0929 10:50:30.973103  133836 addons.go:69] Setting volcano=true in profile "addons-721094"
	I0929 10:50:30.973127  133836 addons.go:238] Setting addon volcano=true in "addons-721094"
	I0929 10:50:30.973151  133836 host.go:66] Checking if "addons-721094" exists ...
	I0929 10:50:30.970918  133836 host.go:66] Checking if "addons-721094" exists ...
	I0929 10:50:30.973526  133836 cli_runner.go:164] Run: docker container inspect addons-721094 --format={{.State.Status}}
	I0929 10:50:30.970938  133836 addons.go:69] Setting registry-creds=true in profile "addons-721094"
	I0929 10:50:30.974592  133836 addons.go:238] Setting addon registry-creds=true in "addons-721094"
	I0929 10:50:30.974630  133836 host.go:66] Checking if "addons-721094" exists ...
	I0929 10:50:30.973839  133836 cli_runner.go:164] Run: docker container inspect addons-721094 --format={{.State.Status}}
	I0929 10:50:30.974905  133836 out.go:179] * Verifying Kubernetes components...
	I0929 10:50:30.973854  133836 cli_runner.go:164] Run: docker container inspect addons-721094 --format={{.State.Status}}
	I0929 10:50:30.973880  133836 addons.go:69] Setting registry=true in profile "addons-721094"
	I0929 10:50:30.975806  133836 addons.go:238] Setting addon registry=true in "addons-721094"
	I0929 10:50:30.975858  133836 host.go:66] Checking if "addons-721094" exists ...
	I0929 10:50:30.973910  133836 addons.go:69] Setting volumesnapshots=true in profile "addons-721094"
	I0929 10:50:30.975957  133836 addons.go:238] Setting addon volumesnapshots=true in "addons-721094"
	I0929 10:50:30.975989  133836 host.go:66] Checking if "addons-721094" exists ...
	I0929 10:50:30.970919  133836 host.go:66] Checking if "addons-721094" exists ...
	I0929 10:50:30.970926  133836 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-721094"
	I0929 10:50:30.976260  133836 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-721094"
	I0929 10:50:30.976288  133836 host.go:66] Checking if "addons-721094" exists ...
	I0929 10:50:30.978415  133836 cli_runner.go:164] Run: docker container inspect addons-721094 --format={{.State.Status}}
	I0929 10:50:30.978449  133836 cli_runner.go:164] Run: docker container inspect addons-721094 --format={{.State.Status}}
	I0929 10:50:30.978899  133836 cli_runner.go:164] Run: docker container inspect addons-721094 --format={{.State.Status}}
	I0929 10:50:30.979629  133836 cli_runner.go:164] Run: docker container inspect addons-721094 --format={{.State.Status}}
	I0929 10:50:30.982292  133836 cli_runner.go:164] Run: docker container inspect addons-721094 --format={{.State.Status}}
	I0929 10:50:30.982845  133836 cli_runner.go:164] Run: docker container inspect addons-721094 --format={{.State.Status}}
	I0929 10:50:30.983289  133836 cli_runner.go:164] Run: docker container inspect addons-721094 --format={{.State.Status}}
	I0929 10:50:30.986985  133836 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 10:50:31.030569  133836 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I0929 10:50:31.032129  133836 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0929 10:50:31.032156  133836 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0929 10:50:31.032245  133836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-721094
	I0929 10:50:31.035467  133836 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0929 10:50:31.038186  133836 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0929 10:50:31.038221  133836 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0929 10:50:31.038287  133836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-721094
	I0929 10:50:31.042308  133836 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0929 10:50:31.043558  133836 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0929 10:50:31.043580  133836 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0929 10:50:31.043668  133836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-721094
	I0929 10:50:31.044070  133836 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-721094"
	I0929 10:50:31.044339  133836 host.go:66] Checking if "addons-721094" exists ...
	I0929 10:50:31.045601  133836 cli_runner.go:164] Run: docker container inspect addons-721094 --format={{.State.Status}}
	W0929 10:50:31.051131  133836 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0929 10:50:31.051953  133836 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0929 10:50:31.056504  133836 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I0929 10:50:31.056527  133836 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0929 10:50:31.057831  133836 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0929 10:50:31.057854  133836 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I0929 10:50:31.057923  133836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-721094
	I0929 10:50:31.058896  133836 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0929 10:50:31.059263  133836 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0929 10:50:31.059283  133836 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0929 10:50:31.059337  133836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-721094
	I0929 10:50:31.064574  133836 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0929 10:50:31.065012  133836 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 10:50:31.066944  133836 host.go:66] Checking if "addons-721094" exists ...
	I0929 10:50:31.067085  133836 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0929 10:50:31.068860  133836 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0929 10:50:31.068945  133836 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 10:50:31.070962  133836 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0929 10:50:31.070998  133836 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0929 10:50:31.071052  133836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-721094
	I0929 10:50:31.072325  133836 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0929 10:50:31.073836  133836 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I0929 10:50:31.075949  133836 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0929 10:50:31.075971  133836 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0929 10:50:31.076031  133836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-721094
	I0929 10:50:31.076193  133836 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0929 10:50:31.076202  133836 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0929 10:50:31.077797  133836 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I0929 10:50:31.083973  133836 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0929 10:50:31.084003  133836 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I0929 10:50:31.084075  133836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-721094
	I0929 10:50:31.088810  133836 out.go:179]   - Using image docker.io/registry:3.0.0
	I0929 10:50:31.088843  133836 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0929 10:50:31.090358  133836 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0929 10:50:31.090437  133836 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0929 10:50:31.090674  133836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-721094
	I0929 10:50:31.091642  133836 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0929 10:50:31.094025  133836 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0929 10:50:31.094116  133836 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0929 10:50:31.094220  133836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-721094
	I0929 10:50:31.096192  133836 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I0929 10:50:31.099985  133836 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0929 10:50:31.100015  133836 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0929 10:50:31.100091  133836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-721094
	I0929 10:50:31.111083  133836 addons.go:238] Setting addon default-storageclass=true in "addons-721094"
	I0929 10:50:31.111135  133836 host.go:66] Checking if "addons-721094" exists ...
	I0929 10:50:31.111642  133836 cli_runner.go:164] Run: docker container inspect addons-721094 --format={{.State.Status}}
	I0929 10:50:31.113644  133836 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 10:50:31.113717  133836 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0929 10:50:31.114375  133836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21656-128977/.minikube/machines/addons-721094/id_rsa Username:docker}
	I0929 10:50:31.115561  133836 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 10:50:31.115583  133836 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 10:50:31.115649  133836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-721094
	I0929 10:50:31.118377  133836 out.go:179]   - Using image docker.io/busybox:stable
	I0929 10:50:31.120816  133836 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0929 10:50:31.120990  133836 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0929 10:50:31.121908  133836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-721094
	I0929 10:50:31.122750  133836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21656-128977/.minikube/machines/addons-721094/id_rsa Username:docker}
	I0929 10:50:31.142573  133836 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.3
	I0929 10:50:31.144414  133836 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0929 10:50:31.144436  133836 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0929 10:50:31.144497  133836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-721094
	I0929 10:50:31.148269  133836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21656-128977/.minikube/machines/addons-721094/id_rsa Username:docker}
	I0929 10:50:31.148871  133836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21656-128977/.minikube/machines/addons-721094/id_rsa Username:docker}
	I0929 10:50:31.150282  133836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21656-128977/.minikube/machines/addons-721094/id_rsa Username:docker}
	I0929 10:50:31.152726  133836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21656-128977/.minikube/machines/addons-721094/id_rsa Username:docker}
	I0929 10:50:31.156535  133836 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 10:50:31.156595  133836 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 10:50:31.156681  133836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-721094
	I0929 10:50:31.163928  133836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21656-128977/.minikube/machines/addons-721094/id_rsa Username:docker}
	I0929 10:50:31.164724  133836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21656-128977/.minikube/machines/addons-721094/id_rsa Username:docker}
	I0929 10:50:31.165904  133836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21656-128977/.minikube/machines/addons-721094/id_rsa Username:docker}
	I0929 10:50:31.184071  133836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21656-128977/.minikube/machines/addons-721094/id_rsa Username:docker}
	I0929 10:50:31.185611  133836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21656-128977/.minikube/machines/addons-721094/id_rsa Username:docker}
	I0929 10:50:31.198610  133836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21656-128977/.minikube/machines/addons-721094/id_rsa Username:docker}
	I0929 10:50:31.200862  133836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21656-128977/.minikube/machines/addons-721094/id_rsa Username:docker}
	I0929 10:50:31.202319  133836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21656-128977/.minikube/machines/addons-721094/id_rsa Username:docker}
	W0929 10:50:31.203212  133836 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0929 10:50:31.203252  133836 retry.go:31] will retry after 167.011006ms: ssh: handshake failed: EOF
	I0929 10:50:31.210065  133836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21656-128977/.minikube/machines/addons-721094/id_rsa Username:docker}
	I0929 10:50:31.215575  133836 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
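
	[annotation] The long bash pipeline above rewrites the coredns ConfigMap in place: the first sed expression inserts a hosts block mapping host.minikube.internal to the docker network gateway (192.168.49.1 on this run) immediately before the "forward . /etc/resolv.conf" line, and the second inserts a "log" directive before "errors". Reconstructed from those sed expressions (other default Corefile plugins elided; shown as a Go raw-string constant, not copied from the cluster), the resulting fragment would look roughly like:

	    package main

	    import "fmt"

	    // corefileFragment is a reconstruction of what the sed pipeline in the log
	    // produces inside the coredns ConfigMap; the gateway IP may differ on other hosts.
	    const corefileFragment = `.:53 {
	        log
	        errors
	        # ...other default plugins elided...
	        hosts {
	           192.168.49.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf
	        # ...
	    }`

	    func main() { fmt.Println(corefileFragment) }

	The "host record injected into CoreDNS's ConfigMap" line at 10:50:31.660570 below confirms the replace succeeded.
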
	I0929 10:50:31.223996  133836 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 10:50:31.278487  133836 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0929 10:50:31.278520  133836 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0929 10:50:31.280799  133836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0929 10:50:31.304351  133836 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0929 10:50:31.304380  133836 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0929 10:50:31.317782  133836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0929 10:50:31.321979  133836 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0929 10:50:31.322007  133836 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0929 10:50:31.335612  133836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0929 10:50:31.337233  133836 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0929 10:50:31.337259  133836 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0929 10:50:31.352099  133836 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:50:31.352123  133836 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I0929 10:50:31.363853  133836 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0929 10:50:31.363932  133836 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0929 10:50:31.366160  133836 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0929 10:50:31.366186  133836 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0929 10:50:31.375309  133836 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0929 10:50:31.375402  133836 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0929 10:50:31.392129  133836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 10:50:31.392898  133836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0929 10:50:31.398663  133836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0929 10:50:31.399334  133836 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0929 10:50:31.399356  133836 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0929 10:50:31.401401  133836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0929 10:50:31.403074  133836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:50:31.410331  133836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0929 10:50:31.420190  133836 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0929 10:50:31.420226  133836 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0929 10:50:31.444449  133836 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0929 10:50:31.444499  133836 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0929 10:50:31.450187  133836 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0929 10:50:31.450218  133836 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0929 10:50:31.452793  133836 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0929 10:50:31.452906  133836 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0929 10:50:31.477141  133836 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0929 10:50:31.477241  133836 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0929 10:50:31.491353  133836 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 10:50:31.491381  133836 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0929 10:50:31.498112  133836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0929 10:50:31.523617  133836 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0929 10:50:31.523729  133836 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0929 10:50:31.532810  133836 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0929 10:50:31.532855  133836 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0929 10:50:31.540857  133836 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0929 10:50:31.540883  133836 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0929 10:50:31.566776  133836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 10:50:31.596154  133836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 10:50:31.609174  133836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0929 10:50:31.621924  133836 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0929 10:50:31.621954  133836 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0929 10:50:31.641565  133836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0929 10:50:31.660570  133836 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0929 10:50:31.661999  133836 node_ready.go:35] waiting up to 6m0s for node "addons-721094" to be "Ready" ...
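
	[annotation] "Waiting up to 6m0s for node ... to be Ready" refers to the node's Ready condition in its status; the later node_ready.go lines reporting "Ready":"False" mean that condition is still False, which is typical until the CNI (kindnet here) is up. A minimal sketch of reading that condition with the standard Kubernetes API types (not minikube's node_ready.go):

	    package main

	    import (
	    	"fmt"

	    	corev1 "k8s.io/api/core/v1"
	    )

	    // nodeIsReady reports whether the node's Ready condition is True, which is
	    // what the "Ready":"False" (will retry) messages in the log are checking.
	    func nodeIsReady(node *corev1.Node) bool {
	    	for _, c := range node.Status.Conditions {
	    		if c.Type == corev1.NodeReady {
	    			return c.Status == corev1.ConditionTrue
	    		}
	    	}
	    	return false
	    }

	    func main() {
	    	n := &corev1.Node{}
	    	n.Status.Conditions = []corev1.NodeCondition{{Type: corev1.NodeReady, Status: corev1.ConditionFalse}}
	    	fmt.Println(nodeIsReady(n)) // false, matching the log until the CNI is running
	    }
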
	I0929 10:50:31.704785  133836 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0929 10:50:31.704814  133836 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0929 10:50:31.768222  133836 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0929 10:50:31.768246  133836 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0929 10:50:31.841583  133836 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0929 10:50:31.841697  133836 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0929 10:50:31.927543  133836 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0929 10:50:31.927571  133836 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0929 10:50:32.001924  133836 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0929 10:50:32.001954  133836 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0929 10:50:32.066805  133836 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0929 10:50:32.066844  133836 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0929 10:50:32.123544  133836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0929 10:50:32.181797  133836 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-721094" context rescaled to 1 replicas
	I0929 10:50:32.511978  133836 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.194155646s)
	I0929 10:50:32.512015  133836 addons.go:479] Verifying addon ingress=true in "addons-721094"
	I0929 10:50:32.512052  133836 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.176402014s)
	I0929 10:50:32.512131  133836 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.119204659s)
	I0929 10:50:32.512094  133836 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.11992492s)
	I0929 10:50:32.512205  133836 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (1.110776768s)
	I0929 10:50:32.512177  133836 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.113471039s)
	I0929 10:50:32.512298  133836 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.109195081s)
	W0929 10:50:32.512323  133836 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:50:32.512343  133836 retry.go:31] will retry after 130.154824ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
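
	[annotation] The failure above is kubectl's client-side validation that every manifest declares apiVersion and kind; "apiVersion not set, kind not set" means ig-crd.yaml carries neither field, which is consistent with the 14-byte ig-crd.yaml copied at 10:50:31.057854 and explains why each retry below fails identically. A rough sketch of the same minimal check, using gopkg.in/yaml.v3 and an arbitrary stand-in manifest (not the real addon file):

	    package main

	    import (
	    	"fmt"

	    	"gopkg.in/yaml.v3"
	    )

	    // typeMeta mirrors the two fields kubectl insists on for every object.
	    type typeMeta struct {
	    	APIVersion string `yaml:"apiVersion"`
	    	Kind       string `yaml:"kind"`
	    }

	    func validate(manifest []byte) error {
	    	var tm typeMeta
	    	if err := yaml.Unmarshal(manifest, &tm); err != nil {
	    		return err
	    	}
	    	if tm.APIVersion == "" || tm.Kind == "" {
	    		return fmt.Errorf("apiVersion not set, kind not set")
	    	}
	    	return nil
	    }

	    func main() {
	    	// An effectively empty manifest fails this check no matter how
	    	// many times the apply is retried.
	    	fmt.Println(validate([]byte("# placeholder\n")))
	    }
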
	I0929 10:50:32.512378  133836 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.102021175s)
	I0929 10:50:32.512416  133836 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.014278007s)
	I0929 10:50:32.512435  133836 addons.go:479] Verifying addon registry=true in "addons-721094"
	I0929 10:50:32.512555  133836 addons.go:479] Verifying addon metrics-server=true in "addons-721094"
	I0929 10:50:32.514940  133836 out.go:179] * Verifying ingress addon...
	I0929 10:50:32.515733  133836 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-721094 service yakd-dashboard -n yakd-dashboard
	
	I0929 10:50:32.515736  133836 out.go:179] * Verifying registry addon...
	I0929 10:50:32.517298  133836 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0929 10:50:32.517298  133836 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0929 10:50:32.522370  133836 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0929 10:50:32.522390  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:50:32.522929  133836 kapi.go:86] Found 2 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0929 10:50:32.522945  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0929 10:50:32.523324  133836 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
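
	[annotation] The default-storageclass error above is a standard optimistic-concurrency conflict: the local-path StorageClass was modified between the read and the update, so the apiserver rejects the stale write. client-go's usual remedy is to re-read the object and retry; a minimal sketch with k8s.io/client-go/util/retry and a hypothetical markNonDefault update function (not minikube's actual callback):

	    package main

	    import (
	    	"fmt"

	    	"k8s.io/client-go/util/retry"
	    )

	    // markNonDefault stands in for "get the latest StorageClass, clear the
	    // default annotation, update it"; the conflict error must come from a
	    // freshly read object for the retry to make progress.
	    func markNonDefault(name string) error { return nil /* assumption */ }

	    func main() {
	    	// RetryOnConflict re-runs the function with backoff whenever it returns
	    	// a Conflict error, which is how "the object has been modified; please
	    	// apply your changes to the latest version" is normally handled.
	    	err := retry.RetryOnConflict(retry.DefaultRetry, func() error {
	    		return markNonDefault("local-path")
	    	})
	    	fmt.Println("result:", err)
	    }
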
	I0929 10:50:32.643385  133836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:50:33.023732  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:50:33.023853  133836 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0929 10:50:33.023878  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:50:33.076486  133836 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.434795335s)
	W0929 10:50:33.076536  133836 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0929 10:50:33.076559  133836 retry.go:31] will retry after 230.007322ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
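
	[annotation] "no matches for kind VolumeSnapshotClass" together with "ensure CRDs are installed first" is a CRD establishment race: the VolumeSnapshotClass object is applied in the same batch that creates the volumesnapshotclasses CRD, and the apiserver has not finished registering the new kind yet, which is why the forced re-apply a couple of seconds later succeeds. One common way to avoid the race is to wait for the CRD's Established condition before applying instances of it; a stdlib-only sketch, with crdEstablished as a hypothetical helper standing in for an apiextensions client call:

	    package main

	    import (
	    	"context"
	    	"errors"
	    	"fmt"
	    	"time"
	    )

	    // crdEstablished stands in for reading the CRD and checking that its
	    // Established condition is True; stubbed here for illustration.
	    func crdEstablished(ctx context.Context, name string) (bool, error) { return true, nil }

	    // waitForCRD polls until the named CRD is established or the timeout
	    // expires, removing the "ensure CRDs are installed first" race seen above.
	    func waitForCRD(ctx context.Context, name string, timeout time.Duration) error {
	    	deadline := time.Now().Add(timeout)
	    	for time.Now().Before(deadline) {
	    		if ok, err := crdEstablished(ctx, name); err != nil {
	    			return err
	    		} else if ok {
	    			return nil
	    		}
	    		time.Sleep(250 * time.Millisecond)
	    	}
	    	return errors.New("timed out waiting for CRD " + name)
	    }

	    func main() {
	    	err := waitForCRD(context.Background(), "volumesnapshotclasses.snapshot.storage.k8s.io", 30*time.Second)
	    	fmt.Println(err)
	    }
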
	I0929 10:50:33.076790  133836 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-721094"
	I0929 10:50:33.078297  133836 out.go:179] * Verifying csi-hostpath-driver addon...
	I0929 10:50:33.080248  133836 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0929 10:50:33.089527  133836 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0929 10:50:33.089553  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:50:33.289933  133836 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:50:33.289962  133836 retry.go:31] will retry after 499.067131ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:50:33.307155  133836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0929 10:50:33.520502  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:50:33.520603  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:50:33.582981  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:50:33.665265  133836 node_ready.go:57] node "addons-721094" has "Ready":"False" status (will retry)
	I0929 10:50:33.789870  133836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:50:34.020339  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:50:34.020466  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:50:34.083409  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:50:34.520616  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:50:34.520863  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:50:34.582997  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:50:35.020702  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:50:35.020819  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:50:35.121218  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:50:35.520620  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:50:35.520747  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:50:35.621921  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:50:35.665390  133836 node_ready.go:57] node "addons-721094" has "Ready":"False" status (will retry)
	I0929 10:50:35.775913  133836 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.468700229s)
	I0929 10:50:35.775981  133836 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.986072113s)
	W0929 10:50:35.776023  133836 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:50:35.776046  133836 retry.go:31] will retry after 508.828464ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:50:36.020733  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:50:36.020905  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:50:36.122109  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:50:36.285298  133836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:50:36.520651  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:50:36.520815  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:50:36.582758  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:50:36.820022  133836 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:50:36.820061  133836 retry.go:31] will retry after 820.059601ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:50:37.020278  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:50:37.020519  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:50:37.121236  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:50:37.520721  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:50:37.520836  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:50:37.583560  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:50:37.640702  133836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W0929 10:50:37.665647  133836 node_ready.go:57] node "addons-721094" has "Ready":"False" status (will retry)
	I0929 10:50:38.020637  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:50:38.020854  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:50:38.121637  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:50:38.170793  133836 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:50:38.170843  133836 retry.go:31] will retry after 990.896042ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:50:38.520672  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:50:38.520818  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:50:38.583114  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:50:38.672798  133836 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0929 10:50:38.672891  133836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-721094
	I0929 10:50:38.690613  133836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21656-128977/.minikube/machines/addons-721094/id_rsa Username:docker}
	I0929 10:50:38.797491  133836 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0929 10:50:38.816641  133836 addons.go:238] Setting addon gcp-auth=true in "addons-721094"
	I0929 10:50:38.816714  133836 host.go:66] Checking if "addons-721094" exists ...
	I0929 10:50:38.817104  133836 cli_runner.go:164] Run: docker container inspect addons-721094 --format={{.State.Status}}
	I0929 10:50:38.835221  133836 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0929 10:50:38.835273  133836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-721094
	I0929 10:50:38.852272  133836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21656-128977/.minikube/machines/addons-721094/id_rsa Username:docker}
	I0929 10:50:38.944458  133836 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 10:50:38.945662  133836 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0929 10:50:38.946686  133836 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0929 10:50:38.946702  133836 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0929 10:50:38.965016  133836 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0929 10:50:38.965039  133836 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0929 10:50:38.982544  133836 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0929 10:50:38.982567  133836 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0929 10:50:39.000355  133836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0929 10:50:39.021266  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:50:39.021405  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:50:39.083702  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:50:39.161955  133836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:50:39.318255  133836 addons.go:479] Verifying addon gcp-auth=true in "addons-721094"
	I0929 10:50:39.319399  133836 out.go:179] * Verifying gcp-auth addon...
	I0929 10:50:39.321220  133836 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0929 10:50:39.323672  133836 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0929 10:50:39.323690  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:50:39.520768  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:50:39.520978  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:50:39.584607  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:50:39.719992  133836 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:50:39.720030  133836 retry.go:31] will retry after 1.6634913s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:50:39.824445  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:50:40.020016  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:50:40.020149  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:50:40.083804  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:50:40.165332  133836 node_ready.go:57] node "addons-721094" has "Ready":"False" status (will retry)
	I0929 10:50:40.324019  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:50:40.520724  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:50:40.520973  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:50:40.583315  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:50:40.823897  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:50:41.020435  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:50:41.020525  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:50:41.083334  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:50:41.324146  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:50:41.384346  133836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:50:41.521350  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:50:41.521507  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:50:41.582873  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:50:41.823569  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 10:50:41.921724  133836 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:50:41.921750  133836 retry.go:31] will retry after 1.67816738s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:50:42.020351  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:50:42.020483  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:50:42.082897  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:50:42.324845  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:50:42.520624  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:50:42.520735  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:50:42.583099  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:50:42.664579  133836 node_ready.go:57] node "addons-721094" has "Ready":"False" status (will retry)
	I0929 10:50:42.824079  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:50:43.020889  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:50:43.020987  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:50:43.083442  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:50:43.325402  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:50:43.519980  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:50:43.520279  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:50:43.583493  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:50:43.600949  133836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:50:43.824045  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:50:44.021667  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:50:44.021807  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:50:44.083193  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:50:44.129932  133836 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:50:44.129960  133836 retry.go:31] will retry after 4.572703016s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:50:44.324236  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:50:44.520814  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:50:44.521112  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:50:44.583234  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:50:44.664737  133836 node_ready.go:57] node "addons-721094" has "Ready":"False" status (will retry)
	I0929 10:50:44.824075  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:50:45.020969  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:50:45.021081  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:50:45.083505  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:50:45.324990  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:50:45.520583  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:50:45.520730  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:50:45.583179  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:50:45.824205  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:50:46.021024  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:50:46.021271  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:50:46.083488  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:50:46.324511  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:50:46.520137  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:50:46.520203  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:50:46.583711  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:50:46.665109  133836 node_ready.go:57] node "addons-721094" has "Ready":"False" status (will retry)
	I0929 10:50:46.824793  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:50:47.020444  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:50:47.020591  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:50:47.082988  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:50:47.324084  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:50:47.520731  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:50:47.520789  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:50:47.582938  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:50:47.823681  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:50:48.020370  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:50:48.020547  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:50:48.083066  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:50:48.324437  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:50:48.519955  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:50:48.519991  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:50:48.583469  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:50:48.703159  133836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:50:48.823968  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:50:49.020374  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:50:49.020437  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:50:49.083558  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:50:49.164383  133836 node_ready.go:57] node "addons-721094" has "Ready":"False" status (will retry)
	W0929 10:50:49.232954  133836 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:50:49.232983  133836 retry.go:31] will retry after 7.43415465s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:50:49.324557  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:50:49.520440  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:50:49.520573  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:50:49.583078  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:50:49.824223  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:50:50.020725  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:50:50.020731  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:50:50.083151  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:50:50.324155  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:50:50.521037  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:50:50.521090  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:50:50.583464  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:50:50.824881  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:50:51.020405  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:50:51.020511  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:50:51.083262  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:50:51.164959  133836 node_ready.go:57] node "addons-721094" has "Ready":"False" status (will retry)
	I0929 10:50:51.324338  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:50:51.520146  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:50:51.520230  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:50:51.583910  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:50:51.823654  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:50:52.020456  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:50:52.020702  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:50:52.082958  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:50:52.325300  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:50:52.520203  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:50:52.520300  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:50:52.583748  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:50:52.823490  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:50:53.020542  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:50:53.020605  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:50:53.083183  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:50:53.324094  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:50:53.521114  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:50:53.521186  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:50:53.583876  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:50:53.665355  133836 node_ready.go:57] node "addons-721094" has "Ready":"False" status (will retry)
	I0929 10:50:53.823509  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:50:54.020459  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:50:54.020688  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:50:54.083100  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:50:54.324390  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:50:54.519975  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:50:54.520118  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:50:54.585189  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:50:54.824082  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:50:55.020800  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:50:55.020998  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:50:55.083597  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:50:55.324542  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:50:55.520353  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:50:55.520385  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:50:55.582957  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:50:55.665492  133836 node_ready.go:57] node "addons-721094" has "Ready":"False" status (will retry)
	I0929 10:50:55.823628  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:50:56.020538  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:50:56.020704  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:50:56.083331  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:50:56.324795  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:50:56.521126  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:50:56.521190  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:50:56.583570  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:50:56.668107  133836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:50:56.824174  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:50:57.020957  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:50:57.021040  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:50:57.083258  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:50:57.209668  133836 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:50:57.209714  133836 retry.go:31] will retry after 10.292554266s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:50:57.324119  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:50:57.520870  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:50:57.520956  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:50:57.583624  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:50:57.824363  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:50:58.019914  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:50:58.019951  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:50:58.083190  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:50:58.164537  133836 node_ready.go:57] node "addons-721094" has "Ready":"False" status (will retry)
	I0929 10:50:58.324111  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:50:58.520741  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:50:58.520919  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:50:58.583444  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:50:58.824360  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:50:59.019891  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:50:59.020010  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:50:59.083049  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:50:59.324306  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:50:59.520072  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:50:59.520180  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:50:59.583894  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:50:59.824465  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:00.022605  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:00.022729  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:00.083594  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:51:00.165112  133836 node_ready.go:57] node "addons-721094" has "Ready":"False" status (will retry)
	I0929 10:51:00.324474  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:00.520314  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:00.520454  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:00.582940  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:00.823474  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:01.020221  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:01.020259  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:01.082912  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:01.324038  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:01.520774  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:01.520851  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:01.583160  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:01.824221  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:02.019978  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:02.020090  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:02.083367  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:02.324439  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:02.519773  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:02.519891  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:02.583328  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:51:02.664857  133836 node_ready.go:57] node "addons-721094" has "Ready":"False" status (will retry)
	I0929 10:51:02.823950  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:03.020910  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:03.021012  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:03.083605  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:03.324617  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:03.520381  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:03.520439  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:03.583875  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:03.824274  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:04.019924  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:04.020113  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:04.083610  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:04.324486  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:04.520196  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:04.520254  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:04.583721  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:51:04.665309  133836 node_ready.go:57] node "addons-721094" has "Ready":"False" status (will retry)
	I0929 10:51:04.823414  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:05.020027  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:05.020211  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:05.083462  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:05.324430  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:05.519984  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:05.520135  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:05.583429  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:05.824018  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:06.020950  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:06.021110  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:06.083391  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:06.324340  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:06.520151  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:06.520196  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:06.583599  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:06.824584  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:07.020185  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:07.020235  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:07.083763  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:51:07.165328  133836 node_ready.go:57] node "addons-721094" has "Ready":"False" status (will retry)
	I0929 10:51:07.323706  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:07.502965  133836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:51:07.520168  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:07.520204  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:07.583654  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:07.824088  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:08.019851  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:08.019947  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0929 10:51:08.024319  133836 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:51:08.024345  133836 retry.go:31] will retry after 14.836775476s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:51:08.082703  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:08.324552  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:08.520532  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:08.520721  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:08.582812  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:08.823757  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:09.020369  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:09.020527  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:09.082767  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:51:09.165478  133836 node_ready.go:57] node "addons-721094" has "Ready":"False" status (will retry)
	I0929 10:51:09.323811  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:09.520525  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:09.520674  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:09.583240  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:09.823897  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:10.020750  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:10.020904  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:10.083700  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:10.324558  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:10.520436  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:10.520574  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:10.582907  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:10.823177  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:11.021239  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:11.021295  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:11.083008  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:51:11.165587  133836 node_ready.go:57] node "addons-721094" has "Ready":"False" status (will retry)
	I0929 10:51:11.324116  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:11.520754  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:11.520997  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:11.583338  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:11.829248  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:12.020011  133836 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0929 10:51:12.020030  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:12.020035  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:12.084298  133836 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0929 10:51:12.084327  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:12.164240  133836 node_ready.go:49] node "addons-721094" is "Ready"
	I0929 10:51:12.164272  133836 node_ready.go:38] duration metric: took 40.502237186s for node "addons-721094" to be "Ready" ...
	I0929 10:51:12.164290  133836 api_server.go:52] waiting for apiserver process to appear ...
	I0929 10:51:12.164346  133836 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 10:51:12.178001  133836 api_server.go:72] duration metric: took 41.207363037s to wait for apiserver process to appear ...
	I0929 10:51:12.178046  133836 api_server.go:88] waiting for apiserver healthz status ...
	I0929 10:51:12.178089  133836 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0929 10:51:12.183803  133836 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0929 10:51:12.184951  133836 api_server.go:141] control plane version: v1.34.0
	I0929 10:51:12.184978  133836 api_server.go:131] duration metric: took 6.923175ms to wait for apiserver health ...
	I0929 10:51:12.184990  133836 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 10:51:12.189098  133836 system_pods.go:59] 20 kube-system pods found
	I0929 10:51:12.189130  133836 system_pods.go:61] "amd-gpu-device-plugin-h8thq" [39eb999d-7a68-4e10-a475-ec78d5b61aa3] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0929 10:51:12.189137  133836 system_pods.go:61] "coredns-66bc5c9577-dbsnv" [4505282e-615d-4744-90c6-79da2a6f6b22] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 10:51:12.189145  133836 system_pods.go:61] "csi-hostpath-attacher-0" [8394476c-a9f5-48a6-9ca2-c234c17f2955] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0929 10:51:12.189151  133836 system_pods.go:61] "csi-hostpath-resizer-0" [508dc6c4-4092-42d0-87eb-275dddb1c5e3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0929 10:51:12.189160  133836 system_pods.go:61] "csi-hostpathplugin-c4btq" [cc3cda0f-9924-413c-baeb-d8bce5343c05] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0929 10:51:12.189166  133836 system_pods.go:61] "etcd-addons-721094" [6e1abbfe-1526-4540-a583-0720f4b49f58] Running
	I0929 10:51:12.189175  133836 system_pods.go:61] "kindnet-kpbkj" [1e2598cb-0455-45dc-973b-610f3c4d5d7a] Running
	I0929 10:51:12.189180  133836 system_pods.go:61] "kube-apiserver-addons-721094" [2e45594a-0ae1-4483-8703-b8d6a391e38d] Running
	I0929 10:51:12.189186  133836 system_pods.go:61] "kube-controller-manager-addons-721094" [3e70a582-5a3c-47a7-be38-8a1f3698f0aa] Running
	I0929 10:51:12.189194  133836 system_pods.go:61] "kube-ingress-dns-minikube" [6e33c1b6-0101-44ca-82c7-105e3f5d905b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 10:51:12.189203  133836 system_pods.go:61] "kube-proxy-bv2hp" [49dc1fef-c9f5-4525-96c3-7caa00dbff9b] Running
	I0929 10:51:12.189207  133836 system_pods.go:61] "kube-scheduler-addons-721094" [66d271af-f46d-41cd-b186-d972da730004] Running
	I0929 10:51:12.189211  133836 system_pods.go:61] "metrics-server-85b7d694d7-9lqfj" [dd50327c-15f5-41a0-9172-bbbb6eae1d89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 10:51:12.189217  133836 system_pods.go:61] "nvidia-device-plugin-daemonset-4b4ln" [e0b7d031-c996-4cc7-ad5b-1e9b0536acb3] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 10:51:12.189227  133836 system_pods.go:61] "registry-66898fdd98-rlbb5" [583b7d01-6d10-4a27-bc85-640cbbe0d7a8] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 10:51:12.189233  133836 system_pods.go:61] "registry-creds-764b6fb674-brbgp" [dd374d01-ec6d-433f-b1b0-a9bcda5e2fbd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 10:51:12.189240  133836 system_pods.go:61] "registry-proxy-9pbvq" [123da109-669b-43b6-8733-9d3cc0ef882d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 10:51:12.189248  133836 system_pods.go:61] "snapshot-controller-7d9fbc56b8-gwrqn" [da316204-47b2-4b1f-8d87-aca44b06241b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 10:51:12.189259  133836 system_pods.go:61] "snapshot-controller-7d9fbc56b8-mlnw2" [62933cff-ee27-4343-a735-9af919850e16] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 10:51:12.189272  133836 system_pods.go:61] "storage-provisioner" [1f795f03-29f6-467e-aa36-428410620864] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 10:51:12.189283  133836 system_pods.go:74] duration metric: took 4.28427ms to wait for pod list to return data ...
	I0929 10:51:12.189296  133836 default_sa.go:34] waiting for default service account to be created ...
	I0929 10:51:12.191557  133836 default_sa.go:45] found service account: "default"
	I0929 10:51:12.191588  133836 default_sa.go:55] duration metric: took 2.274299ms for default service account to be created ...
	I0929 10:51:12.191599  133836 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 10:51:12.195110  133836 system_pods.go:86] 20 kube-system pods found
	I0929 10:51:12.195148  133836 system_pods.go:89] "amd-gpu-device-plugin-h8thq" [39eb999d-7a68-4e10-a475-ec78d5b61aa3] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0929 10:51:12.195159  133836 system_pods.go:89] "coredns-66bc5c9577-dbsnv" [4505282e-615d-4744-90c6-79da2a6f6b22] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 10:51:12.195169  133836 system_pods.go:89] "csi-hostpath-attacher-0" [8394476c-a9f5-48a6-9ca2-c234c17f2955] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0929 10:51:12.195178  133836 system_pods.go:89] "csi-hostpath-resizer-0" [508dc6c4-4092-42d0-87eb-275dddb1c5e3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0929 10:51:12.195187  133836 system_pods.go:89] "csi-hostpathplugin-c4btq" [cc3cda0f-9924-413c-baeb-d8bce5343c05] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0929 10:51:12.195195  133836 system_pods.go:89] "etcd-addons-721094" [6e1abbfe-1526-4540-a583-0720f4b49f58] Running
	I0929 10:51:12.195203  133836 system_pods.go:89] "kindnet-kpbkj" [1e2598cb-0455-45dc-973b-610f3c4d5d7a] Running
	I0929 10:51:12.195213  133836 system_pods.go:89] "kube-apiserver-addons-721094" [2e45594a-0ae1-4483-8703-b8d6a391e38d] Running
	I0929 10:51:12.195218  133836 system_pods.go:89] "kube-controller-manager-addons-721094" [3e70a582-5a3c-47a7-be38-8a1f3698f0aa] Running
	I0929 10:51:12.195229  133836 system_pods.go:89] "kube-ingress-dns-minikube" [6e33c1b6-0101-44ca-82c7-105e3f5d905b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 10:51:12.195235  133836 system_pods.go:89] "kube-proxy-bv2hp" [49dc1fef-c9f5-4525-96c3-7caa00dbff9b] Running
	I0929 10:51:12.195241  133836 system_pods.go:89] "kube-scheduler-addons-721094" [66d271af-f46d-41cd-b186-d972da730004] Running
	I0929 10:51:12.195252  133836 system_pods.go:89] "metrics-server-85b7d694d7-9lqfj" [dd50327c-15f5-41a0-9172-bbbb6eae1d89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 10:51:12.195262  133836 system_pods.go:89] "nvidia-device-plugin-daemonset-4b4ln" [e0b7d031-c996-4cc7-ad5b-1e9b0536acb3] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 10:51:12.195273  133836 system_pods.go:89] "registry-66898fdd98-rlbb5" [583b7d01-6d10-4a27-bc85-640cbbe0d7a8] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 10:51:12.195280  133836 system_pods.go:89] "registry-creds-764b6fb674-brbgp" [dd374d01-ec6d-433f-b1b0-a9bcda5e2fbd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 10:51:12.195288  133836 system_pods.go:89] "registry-proxy-9pbvq" [123da109-669b-43b6-8733-9d3cc0ef882d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 10:51:12.195295  133836 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gwrqn" [da316204-47b2-4b1f-8d87-aca44b06241b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 10:51:12.195304  133836 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mlnw2" [62933cff-ee27-4343-a735-9af919850e16] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 10:51:12.195309  133836 system_pods.go:89] "storage-provisioner" [1f795f03-29f6-467e-aa36-428410620864] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 10:51:12.195326  133836 retry.go:31] will retry after 228.125677ms: missing components: kube-dns
	I0929 10:51:12.324280  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:12.431445  133836 system_pods.go:86] 20 kube-system pods found
	I0929 10:51:12.431496  133836 system_pods.go:89] "amd-gpu-device-plugin-h8thq" [39eb999d-7a68-4e10-a475-ec78d5b61aa3] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0929 10:51:12.431509  133836 system_pods.go:89] "coredns-66bc5c9577-dbsnv" [4505282e-615d-4744-90c6-79da2a6f6b22] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 10:51:12.431522  133836 system_pods.go:89] "csi-hostpath-attacher-0" [8394476c-a9f5-48a6-9ca2-c234c17f2955] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0929 10:51:12.431531  133836 system_pods.go:89] "csi-hostpath-resizer-0" [508dc6c4-4092-42d0-87eb-275dddb1c5e3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0929 10:51:12.431541  133836 system_pods.go:89] "csi-hostpathplugin-c4btq" [cc3cda0f-9924-413c-baeb-d8bce5343c05] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0929 10:51:12.431550  133836 system_pods.go:89] "etcd-addons-721094" [6e1abbfe-1526-4540-a583-0720f4b49f58] Running
	I0929 10:51:12.431558  133836 system_pods.go:89] "kindnet-kpbkj" [1e2598cb-0455-45dc-973b-610f3c4d5d7a] Running
	I0929 10:51:12.431565  133836 system_pods.go:89] "kube-apiserver-addons-721094" [2e45594a-0ae1-4483-8703-b8d6a391e38d] Running
	I0929 10:51:12.431580  133836 system_pods.go:89] "kube-controller-manager-addons-721094" [3e70a582-5a3c-47a7-be38-8a1f3698f0aa] Running
	I0929 10:51:12.431590  133836 system_pods.go:89] "kube-ingress-dns-minikube" [6e33c1b6-0101-44ca-82c7-105e3f5d905b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 10:51:12.431596  133836 system_pods.go:89] "kube-proxy-bv2hp" [49dc1fef-c9f5-4525-96c3-7caa00dbff9b] Running
	I0929 10:51:12.431604  133836 system_pods.go:89] "kube-scheduler-addons-721094" [66d271af-f46d-41cd-b186-d972da730004] Running
	I0929 10:51:12.431613  133836 system_pods.go:89] "metrics-server-85b7d694d7-9lqfj" [dd50327c-15f5-41a0-9172-bbbb6eae1d89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 10:51:12.431624  133836 system_pods.go:89] "nvidia-device-plugin-daemonset-4b4ln" [e0b7d031-c996-4cc7-ad5b-1e9b0536acb3] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 10:51:12.431634  133836 system_pods.go:89] "registry-66898fdd98-rlbb5" [583b7d01-6d10-4a27-bc85-640cbbe0d7a8] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 10:51:12.431643  133836 system_pods.go:89] "registry-creds-764b6fb674-brbgp" [dd374d01-ec6d-433f-b1b0-a9bcda5e2fbd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 10:51:12.431653  133836 system_pods.go:89] "registry-proxy-9pbvq" [123da109-669b-43b6-8733-9d3cc0ef882d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 10:51:12.431665  133836 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gwrqn" [da316204-47b2-4b1f-8d87-aca44b06241b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 10:51:12.431676  133836 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mlnw2" [62933cff-ee27-4343-a735-9af919850e16] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 10:51:12.431685  133836 system_pods.go:89] "storage-provisioner" [1f795f03-29f6-467e-aa36-428410620864] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 10:51:12.431707  133836 retry.go:31] will retry after 299.866322ms: missing components: kube-dns
	I0929 10:51:12.528044  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:12.528045  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:12.628390  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:12.736618  133836 system_pods.go:86] 20 kube-system pods found
	I0929 10:51:12.736658  133836 system_pods.go:89] "amd-gpu-device-plugin-h8thq" [39eb999d-7a68-4e10-a475-ec78d5b61aa3] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0929 10:51:12.736670  133836 system_pods.go:89] "coredns-66bc5c9577-dbsnv" [4505282e-615d-4744-90c6-79da2a6f6b22] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 10:51:12.736681  133836 system_pods.go:89] "csi-hostpath-attacher-0" [8394476c-a9f5-48a6-9ca2-c234c17f2955] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0929 10:51:12.736690  133836 system_pods.go:89] "csi-hostpath-resizer-0" [508dc6c4-4092-42d0-87eb-275dddb1c5e3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0929 10:51:12.736703  133836 system_pods.go:89] "csi-hostpathplugin-c4btq" [cc3cda0f-9924-413c-baeb-d8bce5343c05] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0929 10:51:12.736713  133836 system_pods.go:89] "etcd-addons-721094" [6e1abbfe-1526-4540-a583-0720f4b49f58] Running
	I0929 10:51:12.736726  133836 system_pods.go:89] "kindnet-kpbkj" [1e2598cb-0455-45dc-973b-610f3c4d5d7a] Running
	I0929 10:51:12.736731  133836 system_pods.go:89] "kube-apiserver-addons-721094" [2e45594a-0ae1-4483-8703-b8d6a391e38d] Running
	I0929 10:51:12.736741  133836 system_pods.go:89] "kube-controller-manager-addons-721094" [3e70a582-5a3c-47a7-be38-8a1f3698f0aa] Running
	I0929 10:51:12.736750  133836 system_pods.go:89] "kube-ingress-dns-minikube" [6e33c1b6-0101-44ca-82c7-105e3f5d905b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 10:51:12.736758  133836 system_pods.go:89] "kube-proxy-bv2hp" [49dc1fef-c9f5-4525-96c3-7caa00dbff9b] Running
	I0929 10:51:12.736764  133836 system_pods.go:89] "kube-scheduler-addons-721094" [66d271af-f46d-41cd-b186-d972da730004] Running
	I0929 10:51:12.736770  133836 system_pods.go:89] "metrics-server-85b7d694d7-9lqfj" [dd50327c-15f5-41a0-9172-bbbb6eae1d89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 10:51:12.736779  133836 system_pods.go:89] "nvidia-device-plugin-daemonset-4b4ln" [e0b7d031-c996-4cc7-ad5b-1e9b0536acb3] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 10:51:12.736792  133836 system_pods.go:89] "registry-66898fdd98-rlbb5" [583b7d01-6d10-4a27-bc85-640cbbe0d7a8] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 10:51:12.736800  133836 system_pods.go:89] "registry-creds-764b6fb674-brbgp" [dd374d01-ec6d-433f-b1b0-a9bcda5e2fbd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 10:51:12.736811  133836 system_pods.go:89] "registry-proxy-9pbvq" [123da109-669b-43b6-8733-9d3cc0ef882d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 10:51:12.736841  133836 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gwrqn" [da316204-47b2-4b1f-8d87-aca44b06241b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 10:51:12.736851  133836 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mlnw2" [62933cff-ee27-4343-a735-9af919850e16] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 10:51:12.736862  133836 system_pods.go:89] "storage-provisioner" [1f795f03-29f6-467e-aa36-428410620864] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 10:51:12.736881  133836 retry.go:31] will retry after 390.779118ms: missing components: kube-dns
	I0929 10:51:12.824063  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:13.020801  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:13.020984  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:13.083748  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:13.132693  133836 system_pods.go:86] 20 kube-system pods found
	I0929 10:51:13.132734  133836 system_pods.go:89] "amd-gpu-device-plugin-h8thq" [39eb999d-7a68-4e10-a475-ec78d5b61aa3] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0929 10:51:13.132745  133836 system_pods.go:89] "coredns-66bc5c9577-dbsnv" [4505282e-615d-4744-90c6-79da2a6f6b22] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 10:51:13.132755  133836 system_pods.go:89] "csi-hostpath-attacher-0" [8394476c-a9f5-48a6-9ca2-c234c17f2955] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0929 10:51:13.132764  133836 system_pods.go:89] "csi-hostpath-resizer-0" [508dc6c4-4092-42d0-87eb-275dddb1c5e3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0929 10:51:13.132774  133836 system_pods.go:89] "csi-hostpathplugin-c4btq" [cc3cda0f-9924-413c-baeb-d8bce5343c05] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0929 10:51:13.132792  133836 system_pods.go:89] "etcd-addons-721094" [6e1abbfe-1526-4540-a583-0720f4b49f58] Running
	I0929 10:51:13.132804  133836 system_pods.go:89] "kindnet-kpbkj" [1e2598cb-0455-45dc-973b-610f3c4d5d7a] Running
	I0929 10:51:13.132809  133836 system_pods.go:89] "kube-apiserver-addons-721094" [2e45594a-0ae1-4483-8703-b8d6a391e38d] Running
	I0929 10:51:13.132815  133836 system_pods.go:89] "kube-controller-manager-addons-721094" [3e70a582-5a3c-47a7-be38-8a1f3698f0aa] Running
	I0929 10:51:13.132843  133836 system_pods.go:89] "kube-ingress-dns-minikube" [6e33c1b6-0101-44ca-82c7-105e3f5d905b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 10:51:13.132849  133836 system_pods.go:89] "kube-proxy-bv2hp" [49dc1fef-c9f5-4525-96c3-7caa00dbff9b] Running
	I0929 10:51:13.132857  133836 system_pods.go:89] "kube-scheduler-addons-721094" [66d271af-f46d-41cd-b186-d972da730004] Running
	I0929 10:51:13.132866  133836 system_pods.go:89] "metrics-server-85b7d694d7-9lqfj" [dd50327c-15f5-41a0-9172-bbbb6eae1d89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 10:51:13.132880  133836 system_pods.go:89] "nvidia-device-plugin-daemonset-4b4ln" [e0b7d031-c996-4cc7-ad5b-1e9b0536acb3] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 10:51:13.132891  133836 system_pods.go:89] "registry-66898fdd98-rlbb5" [583b7d01-6d10-4a27-bc85-640cbbe0d7a8] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 10:51:13.132899  133836 system_pods.go:89] "registry-creds-764b6fb674-brbgp" [dd374d01-ec6d-433f-b1b0-a9bcda5e2fbd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 10:51:13.132907  133836 system_pods.go:89] "registry-proxy-9pbvq" [123da109-669b-43b6-8733-9d3cc0ef882d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 10:51:13.132914  133836 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gwrqn" [da316204-47b2-4b1f-8d87-aca44b06241b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 10:51:13.132922  133836 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mlnw2" [62933cff-ee27-4343-a735-9af919850e16] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 10:51:13.132933  133836 system_pods.go:89] "storage-provisioner" [1f795f03-29f6-467e-aa36-428410620864] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 10:51:13.132952  133836 retry.go:31] will retry after 367.827159ms: missing components: kube-dns
	I0929 10:51:13.324687  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:13.506104  133836 system_pods.go:86] 20 kube-system pods found
	I0929 10:51:13.506146  133836 system_pods.go:89] "amd-gpu-device-plugin-h8thq" [39eb999d-7a68-4e10-a475-ec78d5b61aa3] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0929 10:51:13.506158  133836 system_pods.go:89] "coredns-66bc5c9577-dbsnv" [4505282e-615d-4744-90c6-79da2a6f6b22] Running
	I0929 10:51:13.506168  133836 system_pods.go:89] "csi-hostpath-attacher-0" [8394476c-a9f5-48a6-9ca2-c234c17f2955] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0929 10:51:13.506175  133836 system_pods.go:89] "csi-hostpath-resizer-0" [508dc6c4-4092-42d0-87eb-275dddb1c5e3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0929 10:51:13.506183  133836 system_pods.go:89] "csi-hostpathplugin-c4btq" [cc3cda0f-9924-413c-baeb-d8bce5343c05] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0929 10:51:13.506190  133836 system_pods.go:89] "etcd-addons-721094" [6e1abbfe-1526-4540-a583-0720f4b49f58] Running
	I0929 10:51:13.506200  133836 system_pods.go:89] "kindnet-kpbkj" [1e2598cb-0455-45dc-973b-610f3c4d5d7a] Running
	I0929 10:51:13.506206  133836 system_pods.go:89] "kube-apiserver-addons-721094" [2e45594a-0ae1-4483-8703-b8d6a391e38d] Running
	I0929 10:51:13.506220  133836 system_pods.go:89] "kube-controller-manager-addons-721094" [3e70a582-5a3c-47a7-be38-8a1f3698f0aa] Running
	I0929 10:51:13.506230  133836 system_pods.go:89] "kube-ingress-dns-minikube" [6e33c1b6-0101-44ca-82c7-105e3f5d905b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 10:51:13.506235  133836 system_pods.go:89] "kube-proxy-bv2hp" [49dc1fef-c9f5-4525-96c3-7caa00dbff9b] Running
	I0929 10:51:13.506241  133836 system_pods.go:89] "kube-scheduler-addons-721094" [66d271af-f46d-41cd-b186-d972da730004] Running
	I0929 10:51:13.506248  133836 system_pods.go:89] "metrics-server-85b7d694d7-9lqfj" [dd50327c-15f5-41a0-9172-bbbb6eae1d89] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 10:51:13.506255  133836 system_pods.go:89] "nvidia-device-plugin-daemonset-4b4ln" [e0b7d031-c996-4cc7-ad5b-1e9b0536acb3] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 10:51:13.506265  133836 system_pods.go:89] "registry-66898fdd98-rlbb5" [583b7d01-6d10-4a27-bc85-640cbbe0d7a8] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 10:51:13.506271  133836 system_pods.go:89] "registry-creds-764b6fb674-brbgp" [dd374d01-ec6d-433f-b1b0-a9bcda5e2fbd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 10:51:13.506283  133836 system_pods.go:89] "registry-proxy-9pbvq" [123da109-669b-43b6-8733-9d3cc0ef882d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 10:51:13.506292  133836 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gwrqn" [da316204-47b2-4b1f-8d87-aca44b06241b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 10:51:13.506312  133836 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mlnw2" [62933cff-ee27-4343-a735-9af919850e16] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 10:51:13.506320  133836 system_pods.go:89] "storage-provisioner" [1f795f03-29f6-467e-aa36-428410620864] Running
	I0929 10:51:13.506332  133836 system_pods.go:126] duration metric: took 1.314725312s to wait for k8s-apps to be running ...
	I0929 10:51:13.506344  133836 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 10:51:13.506401  133836 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 10:51:13.521060  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:13.521447  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:13.521742  133836 system_svc.go:56] duration metric: took 15.390885ms WaitForService to wait for kubelet
	I0929 10:51:13.521768  133836 kubeadm.go:578] duration metric: took 42.551136665s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 10:51:13.521795  133836 node_conditions.go:102] verifying NodePressure condition ...
	I0929 10:51:13.524093  133836 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0929 10:51:13.524121  133836 node_conditions.go:123] node cpu capacity is 8
	I0929 10:51:13.524139  133836 node_conditions.go:105] duration metric: took 2.333162ms to run NodePressure ...
	I0929 10:51:13.524157  133836 start.go:241] waiting for startup goroutines ...
	I0929 10:51:13.605040  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:13.824685  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:14.020912  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:14.020947  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:14.083592  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:14.324840  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:14.520602  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:14.520637  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:14.583274  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:14.823407  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:15.020120  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:15.020177  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:15.083946  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:15.324369  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:15.520276  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:15.520350  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:15.621037  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:15.824162  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:16.021184  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:16.021247  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:16.083943  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:16.324788  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:16.520566  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:16.520612  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:16.587140  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:16.825221  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:17.021238  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:17.021361  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:17.084511  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:17.324230  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:17.520384  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:17.520410  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:17.583073  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:17.824543  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:18.020343  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:18.020340  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:18.084085  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:18.324321  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:18.520197  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:18.520213  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:18.584209  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:18.824370  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:19.019936  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:19.020072  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:19.083469  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:19.324238  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:19.520144  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:19.520188  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:19.584106  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:19.823958  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:20.024279  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:20.024581  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:20.083947  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:20.324813  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:20.520881  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:20.521051  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:20.584809  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:20.824339  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:21.020224  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:21.020244  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:21.083810  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:21.324700  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:21.520912  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:21.520986  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:21.583751  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:21.823987  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:22.020490  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:22.020637  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:22.083673  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:22.324845  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:22.520847  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:22.520887  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:22.584215  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:22.824735  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:22.861805  133836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:51:23.020084  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:23.020121  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:23.084099  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:23.325197  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:23.520347  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:23.520449  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0929 10:51:23.579295  133836 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:51:23.579339  133836 retry.go:31] will retry after 21.397658827s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:51:23.584375  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:23.823591  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:24.020934  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:24.021023  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:24.083980  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:24.325154  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:24.521505  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:24.521528  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:24.623044  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:24.824109  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:25.021228  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:25.021252  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:25.084482  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:25.324593  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:25.520406  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:25.520422  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:25.583106  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:25.824574  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:26.020501  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:26.020576  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:26.083213  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:26.324085  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:26.520994  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:26.521029  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:26.584024  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:26.824025  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:27.020941  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:27.021010  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:27.084570  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:27.324839  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:27.521644  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:27.521774  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:27.584019  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:27.837884  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:28.021470  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:28.021482  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:28.084387  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:28.325054  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:28.520942  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:28.520975  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:28.583797  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:28.823974  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:29.020607  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:29.020645  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:29.083487  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:29.324392  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:29.520428  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:29.520508  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:29.583313  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:29.824068  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:30.021534  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:30.022280  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:30.084486  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:30.324283  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:30.521617  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:30.521731  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:30.624399  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:30.823283  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:31.020406  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:31.020648  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:31.083722  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:31.326190  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:31.521293  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:31.521324  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:31.584105  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:31.823914  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:32.020739  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:32.020915  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:32.083528  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:32.324141  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:32.520962  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:32.521017  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:32.583545  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:32.824165  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:33.020786  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:33.020794  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:33.083365  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:33.323843  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:33.520540  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:33.520607  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:33.583533  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:33.823833  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:34.020731  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:34.020851  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:34.121681  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:34.324607  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:34.520475  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:34.520665  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:34.621353  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:34.823363  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:35.020244  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:35.020421  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:35.084073  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:35.324612  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:35.520530  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:35.520561  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:35.583411  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:35.824558  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:36.020189  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:36.020216  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:36.084078  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:36.324583  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:36.520602  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:36.520702  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:36.583957  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:36.825264  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:37.021261  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:37.021386  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:37.084420  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:37.324073  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:37.521068  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:37.521233  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:37.583949  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:37.824866  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:38.020580  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:38.020751  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:38.083944  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:38.324743  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:38.520870  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:38.520920  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:38.583913  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:38.824233  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:39.021401  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:39.021462  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:39.086136  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:39.324432  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:39.520555  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:39.520592  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:39.583881  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:39.824116  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:40.021100  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:40.021214  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:40.084052  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:40.324327  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:40.520138  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:40.520305  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:40.584345  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:40.824691  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:41.020344  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:41.020503  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:41.083396  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:41.323942  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:41.522139  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:41.522295  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:41.584054  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:41.824639  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:42.020664  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:42.021053  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:42.121289  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:42.324206  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:42.520174  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:42.520216  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:42.583669  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:42.824533  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:43.020695  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:43.020784  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:43.083780  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:43.324342  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:43.520443  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:43.520579  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:43.583045  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:43.824498  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:44.020389  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:44.020661  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:44.083093  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:44.324414  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:44.520650  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:44.520704  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:44.583172  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:44.824791  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:44.978076  133836 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:51:45.020492  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:45.020575  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:45.121732  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:45.324511  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 10:51:45.519815  133836 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0929 10:51:45.519956  133836 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
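	The inspektor-gadget failure above is a manifest-validation error: the rendered ig-crd.yaml is missing the mandatory top-level apiVersion and kind fields that every Kubernetes object must declare. A minimal way to exercise the same check without touching the cluster (a hypothetical reproduction, not part of the captured run) is a client-side dry run with the same kubectl binary:
	# Hypothetical reproduction of the validation failure (not executed in this run):
	# a client-side dry run decodes and validates the manifest without applying it,
	# so it should surface the same missing apiVersion/kind complaint.
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.34.0/kubectl apply --dry-run=client -f /etc/kubernetes/addons/ig-crd.yaml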
	I0929 10:51:45.520053  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:45.520223  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:45.583666  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:45.904426  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:46.020018  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:46.020073  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:46.083842  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:46.324477  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:46.520355  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:46.520462  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:46.584012  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:46.824419  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:47.020524  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:47.020626  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:47.083655  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:47.323974  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:47.520659  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:47.520753  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:47.583905  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:47.824429  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:48.020234  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:48.020333  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:48.084145  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:48.324332  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:48.520161  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:48.520190  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:48.595388  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:48.823977  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:49.021014  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:49.021052  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:49.083480  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:49.324199  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:49.521165  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:49.521224  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:49.584249  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:49.823434  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:50.020208  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:50.020230  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:50.083967  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:50.324809  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:50.520364  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:50.520446  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:50.584072  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:50.824323  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:51.020060  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:51.020115  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:51.084007  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:51.324266  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:51.520973  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:51.521228  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:51.583677  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:51.824008  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:52.021011  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:52.021112  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:52.083479  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:52.323954  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:52.521107  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:52.521209  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:52.584143  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:52.824575  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:53.020857  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:53.020868  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:53.083999  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:53.324809  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:53.521223  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:53.521381  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:53.584770  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:53.824866  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:54.020594  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:54.020608  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:54.083782  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:54.324963  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:54.521000  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:54.521121  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:54.584459  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:54.824020  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:55.021014  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:55.021140  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:55.084153  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:55.325162  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:55.521307  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:55.521504  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:55.584891  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:55.824565  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:56.025885  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:56.026083  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:56.083764  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:56.324256  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:56.520341  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:56.520507  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:56.583420  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:56.824548  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:57.020622  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:57.020889  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:57.083728  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:57.324692  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:57.520765  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:57.520866  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:57.583734  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:57.824140  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:58.021150  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:58.021269  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:58.084355  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:58.324148  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:58.521228  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:58.521317  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:58.584108  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:58.824806  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:59.020860  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:59.020907  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:59.083477  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:59.324630  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:51:59.520792  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:51:59.520923  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:51:59.609246  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:51:59.824713  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:52:00.020626  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:52:00.020666  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:52:00.083231  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:52:00.323813  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:52:00.520883  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:52:00.520890  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:52:00.583884  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:52:00.824264  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:52:01.019843  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:52:01.019957  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:52:01.084044  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:52:01.324354  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:52:01.520761  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:52:01.521014  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:52:01.583349  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:52:01.823899  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:52:02.020911  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:52:02.020929  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:52:02.084302  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:52:02.323980  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:52:02.520987  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:52:02.521019  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:52:02.583667  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:52:02.824231  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:52:03.021479  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:52:03.021548  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:52:03.083292  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:52:03.323704  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:52:03.520576  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:52:03.520702  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:52:03.583229  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:52:03.824747  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:52:04.020743  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:52:04.020742  133836 kapi.go:107] duration metric: took 1m31.503441393s to wait for kubernetes.io/minikube-addons=registry ...
	I0929 10:52:04.083398  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:52:04.375498  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:52:04.520534  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:52:04.583271  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:52:04.824775  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:52:05.021062  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:52:05.083917  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:52:05.325177  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:52:05.520989  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:52:05.583797  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:52:05.824519  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:52:06.021120  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:52:06.084388  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:52:06.324117  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:52:06.520910  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:52:06.583697  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:52:06.891002  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:52:07.020705  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:52:07.083213  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:52:07.325066  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:52:07.521472  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:52:07.583072  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:52:07.824542  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:52:08.020441  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:52:08.085863  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:52:08.327198  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:52:08.521332  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:52:08.584287  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:52:08.825134  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:52:09.020958  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:52:09.083564  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:52:09.324230  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:52:09.521386  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:52:09.584099  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:52:09.824597  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:52:10.021065  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:52:10.083919  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:52:10.324410  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:52:10.520201  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:52:10.584216  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:52:10.824804  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:52:11.021134  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:52:11.084013  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:52:11.324434  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:52:11.520903  133836 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:52:11.622195  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:52:11.824380  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:52:12.020938  133836 kapi.go:107] duration metric: took 1m39.503632918s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0929 10:52:12.083768  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:52:12.324498  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:52:12.583975  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:52:12.824210  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:52:13.083552  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:52:13.324235  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:52:13.583515  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:52:13.824107  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:52:14.084747  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:52:14.325173  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:52:14.584577  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:52:14.824132  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:52:15.084590  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:52:15.324198  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:52:15.583846  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:52:15.824461  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:52:16.083713  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:52:16.324605  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:52:16.583608  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:52:16.824249  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:52:17.084405  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:52:17.323970  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:52:17.583991  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:52:17.824483  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:52:18.083504  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:52:18.324182  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:52:18.584629  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:52:18.824387  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:52:19.083416  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:52:19.324251  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:52:19.584116  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:52:19.824446  133836 kapi.go:107] duration metric: took 1m40.503222604s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0929 10:52:19.826193  133836 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-721094 cluster.
	I0929 10:52:19.827461  133836 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0929 10:52:19.828701  133836 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
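	The gcp-auth note above mentions rerunning addons enable with --refresh so that pods created before the addon finished get the credential mount. A sketch of that invocation for this profile (hypothetical, not executed in this run):
	# Hypothetical follow-up (not part of the captured log): re-enable gcp-auth with
	# --refresh, as the message suggests, so existing pods pick up the mounted credentials.
	out/minikube-linux-amd64 -p addons-721094 addons enable gcp-auth --refresh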
	I0929 10:52:20.084044  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:52:20.584548  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:52:21.084440  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:52:21.583757  133836 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:52:22.084287  133836 kapi.go:107] duration metric: took 1m49.004034853s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0929 10:52:22.086150  133836 out.go:179] * Enabled addons: amd-gpu-device-plugin, cloud-spanner, registry-creds, ingress-dns, nvidia-device-plugin, storage-provisioner, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0929 10:52:22.087399  133836 addons.go:514] duration metric: took 1m51.116705617s for enable addons: enabled=[amd-gpu-device-plugin cloud-spanner registry-creds ingress-dns nvidia-device-plugin storage-provisioner metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0929 10:52:22.087461  133836 start.go:246] waiting for cluster config update ...
	I0929 10:52:22.087489  133836 start.go:255] writing updated cluster config ...
	I0929 10:52:22.087784  133836 ssh_runner.go:195] Run: rm -f paused
	I0929 10:52:22.091767  133836 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 10:52:22.094869  133836 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-dbsnv" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:52:22.098571  133836 pod_ready.go:94] pod "coredns-66bc5c9577-dbsnv" is "Ready"
	I0929 10:52:22.098594  133836 pod_ready.go:86] duration metric: took 3.700688ms for pod "coredns-66bc5c9577-dbsnv" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:52:22.107043  133836 pod_ready.go:83] waiting for pod "etcd-addons-721094" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:52:22.115799  133836 pod_ready.go:94] pod "etcd-addons-721094" is "Ready"
	I0929 10:52:22.115838  133836 pod_ready.go:86] duration metric: took 8.767264ms for pod "etcd-addons-721094" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:52:22.122603  133836 pod_ready.go:83] waiting for pod "kube-apiserver-addons-721094" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:52:22.127230  133836 pod_ready.go:94] pod "kube-apiserver-addons-721094" is "Ready"
	I0929 10:52:22.127260  133836 pod_ready.go:86] duration metric: took 4.627351ms for pod "kube-apiserver-addons-721094" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:52:22.129417  133836 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-721094" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:52:22.495355  133836 pod_ready.go:94] pod "kube-controller-manager-addons-721094" is "Ready"
	I0929 10:52:22.495379  133836 pod_ready.go:86] duration metric: took 365.941694ms for pod "kube-controller-manager-addons-721094" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:52:22.695124  133836 pod_ready.go:83] waiting for pod "kube-proxy-bv2hp" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:52:23.096066  133836 pod_ready.go:94] pod "kube-proxy-bv2hp" is "Ready"
	I0929 10:52:23.096099  133836 pod_ready.go:86] duration metric: took 400.943044ms for pod "kube-proxy-bv2hp" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:52:23.296645  133836 pod_ready.go:83] waiting for pod "kube-scheduler-addons-721094" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:52:23.695215  133836 pod_ready.go:94] pod "kube-scheduler-addons-721094" is "Ready"
	I0929 10:52:23.695241  133836 pod_ready.go:86] duration metric: took 398.570758ms for pod "kube-scheduler-addons-721094" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:52:23.695252  133836 pod_ready.go:40] duration metric: took 1.60345644s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 10:52:23.740267  133836 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0929 10:52:23.742239  133836 out.go:179] * Done! kubectl is now configured to use "addons-721094" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 29 10:54:25 addons-721094 crio[935]: time="2025-09-29 10:54:25.580801971Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Sep 29 10:54:25 addons-721094 crio[935]: time="2025-09-29 10:54:25.580852394Z" level=info msg="Removed pod sandbox: 8af2cc9afb6feed08e892ddb46acdf48e31e4e8056d2b507261a66967af8fe61" id=a8613fc9-7581-4f26-856b-89122026a041 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 29 10:54:25 addons-721094 crio[935]: time="2025-09-29 10:54:25.581182259Z" level=info msg="Stopping pod sandbox: 9f1a61d6396d8f0c72fe0f9381d2911efb4695c814834c3f01a1296af2a3bef1" id=bebe47eb-8bd2-4474-940a-b0d83ac01a54 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 29 10:54:25 addons-721094 crio[935]: time="2025-09-29 10:54:25.581207524Z" level=info msg="Stopped pod sandbox (already stopped): 9f1a61d6396d8f0c72fe0f9381d2911efb4695c814834c3f01a1296af2a3bef1" id=bebe47eb-8bd2-4474-940a-b0d83ac01a54 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 29 10:54:25 addons-721094 crio[935]: time="2025-09-29 10:54:25.581422031Z" level=info msg="Removing pod sandbox: 9f1a61d6396d8f0c72fe0f9381d2911efb4695c814834c3f01a1296af2a3bef1" id=fbc56e2b-4727-4af4-932b-569ed184e171 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 29 10:54:25 addons-721094 crio[935]: time="2025-09-29 10:54:25.587547425Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Sep 29 10:54:25 addons-721094 crio[935]: time="2025-09-29 10:54:25.587574412Z" level=info msg="Removed pod sandbox: 9f1a61d6396d8f0c72fe0f9381d2911efb4695c814834c3f01a1296af2a3bef1" id=fbc56e2b-4727-4af4-932b-569ed184e171 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 29 10:54:25 addons-721094 crio[935]: time="2025-09-29 10:54:25.587924036Z" level=info msg="Stopping pod sandbox: e3ddd361c12f488f3bcc7732775b202754cd9261e23f1f0d8533a8c1c7d9e938" id=aee82902-783c-438a-b687-72701a4d8b23 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 29 10:54:25 addons-721094 crio[935]: time="2025-09-29 10:54:25.587963752Z" level=info msg="Stopped pod sandbox (already stopped): e3ddd361c12f488f3bcc7732775b202754cd9261e23f1f0d8533a8c1c7d9e938" id=aee82902-783c-438a-b687-72701a4d8b23 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 29 10:54:25 addons-721094 crio[935]: time="2025-09-29 10:54:25.588317100Z" level=info msg="Removing pod sandbox: e3ddd361c12f488f3bcc7732775b202754cd9261e23f1f0d8533a8c1c7d9e938" id=ff521cc6-e85b-446f-8042-adb8f42a3d1a name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 29 10:54:25 addons-721094 crio[935]: time="2025-09-29 10:54:25.594628013Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Sep 29 10:54:25 addons-721094 crio[935]: time="2025-09-29 10:54:25.594658854Z" level=info msg="Removed pod sandbox: e3ddd361c12f488f3bcc7732775b202754cd9261e23f1f0d8533a8c1c7d9e938" id=ff521cc6-e85b-446f-8042-adb8f42a3d1a name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 29 10:55:29 addons-721094 crio[935]: time="2025-09-29 10:55:29.330382802Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-wcnx8/POD" id=dd694e0a-097e-404c-aaab-76edcd7435ab name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 29 10:55:29 addons-721094 crio[935]: time="2025-09-29 10:55:29.330442882Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 29 10:55:29 addons-721094 crio[935]: time="2025-09-29 10:55:29.350524388Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-wcnx8 Namespace:default ID:f537389cbde33d926f961337217549f4fc1e488a3bbaf07e44a8be4463f8cab4 UID:7cd8d92d-6d8e-4543-967d-46dcc5d24da0 NetNS:/var/run/netns/c34173a6-a2e4-43d1-a8f2-6d60ba1c1169 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 29 10:55:29 addons-721094 crio[935]: time="2025-09-29 10:55:29.350566742Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-wcnx8 to CNI network \"kindnet\" (type=ptp)"
	Sep 29 10:55:29 addons-721094 crio[935]: time="2025-09-29 10:55:29.360743492Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-wcnx8 Namespace:default ID:f537389cbde33d926f961337217549f4fc1e488a3bbaf07e44a8be4463f8cab4 UID:7cd8d92d-6d8e-4543-967d-46dcc5d24da0 NetNS:/var/run/netns/c34173a6-a2e4-43d1-a8f2-6d60ba1c1169 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 29 10:55:29 addons-721094 crio[935]: time="2025-09-29 10:55:29.360915590Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-wcnx8 for CNI network kindnet (type=ptp)"
	Sep 29 10:55:29 addons-721094 crio[935]: time="2025-09-29 10:55:29.361677799Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Sep 29 10:55:29 addons-721094 crio[935]: time="2025-09-29 10:55:29.362506903Z" level=info msg="Ran pod sandbox f537389cbde33d926f961337217549f4fc1e488a3bbaf07e44a8be4463f8cab4 with infra container: default/hello-world-app-5d498dc89-wcnx8/POD" id=dd694e0a-097e-404c-aaab-76edcd7435ab name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 29 10:55:29 addons-721094 crio[935]: time="2025-09-29 10:55:29.363560918Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=bce4c2a2-8fd4-4924-baf9-56a60120c4e8 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 10:55:29 addons-721094 crio[935]: time="2025-09-29 10:55:29.363813825Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=bce4c2a2-8fd4-4924-baf9-56a60120c4e8 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 10:55:29 addons-721094 crio[935]: time="2025-09-29 10:55:29.364429955Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=72c9ae29-8675-413e-a012-249caf9910f3 name=/runtime.v1.ImageService/PullImage
	Sep 29 10:55:29 addons-721094 crio[935]: time="2025-09-29 10:55:29.367846024Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Sep 29 10:55:30 addons-721094 crio[935]: time="2025-09-29 10:55:30.407228751Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	cd841d990a9d4       docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8                              2 minutes ago       Running             nginx                     0                   58973f906609d       nginx
	b048831fba360       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   37f193468f510       busybox
	942cfbdbd7dcd       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef             3 minutes ago       Running             controller                0                   0d301e63ba533       ingress-nginx-controller-9cc49f96f-k7qpz
	86af275aa2b0a       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5            3 minutes ago       Running             gadget                    0                   3b6a83aeb1644       gadget-jmscx
	e846c254e2025       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24   3 minutes ago       Exited              patch                     0                   d3f1d308b53b1       ingress-nginx-admission-patch-2j6cm
	e37e80b6db974       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24   3 minutes ago       Exited              create                    0                   4153cbd7f8d99       ingress-nginx-admission-create-r95n4
	bc1ec81c8c475       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               4 minutes ago       Running             minikube-ingress-dns      0                   8ab80d70720cb       kube-ingress-dns-minikube
	2d07d338f95c9       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             4 minutes ago       Running             coredns                   0                   8fc49991cc82b       coredns-66bc5c9577-dbsnv
	f5bff8e05b98e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   1fea79921b7d6       storage-provisioner
	e2e2841da1ad2       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                             4 minutes ago       Running             kindnet-cni               0                   d5e4021663ef6       kindnet-kpbkj
	9ef4471f654bd       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                                             4 minutes ago       Running             kube-proxy                0                   e7e9bdca80a50       kube-proxy-bv2hp
	5e03baa330d42       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                                             5 minutes ago       Running             kube-controller-manager   0                   cc7e5c07c70c8       kube-controller-manager-addons-721094
	1ced7cc99c551       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                             5 minutes ago       Running             etcd                      0                   2a5b5c2ca2c07       etcd-addons-721094
	3672ac0879661       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                                             5 minutes ago       Running             kube-scheduler            0                   68c2fe16ddab0       kube-scheduler-addons-721094
	c0c74784b2783       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                                             5 minutes ago       Running             kube-apiserver            0                   ba0848d50c5b1       kube-apiserver-addons-721094
	
	
	==> coredns [2d07d338f95c90f0f15f577d49a87200e27d7e3615398a55398f5f8ee4e5bec6] <==
	[INFO] 10.244.0.18:56185 - 1616 "A IN registry.kube-system.svc.cluster.local.europe-west2-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,aa,rd,ra 206 0.000080421s
	[INFO] 10.244.0.18:50154 - 63153 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000059262s
	[INFO] 10.244.0.18:50154 - 63430 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000105915s
	[INFO] 10.244.0.18:55763 - 50624 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000046331s
	[INFO] 10.244.0.18:55763 - 50414 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000068625s
	[INFO] 10.244.0.18:47626 - 56577 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000116503s
	[INFO] 10.244.0.18:47626 - 57050 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000151247s
	[INFO] 10.244.0.22:44709 - 61285 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000223461s
	[INFO] 10.244.0.22:50647 - 21566 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000128405s
	[INFO] 10.244.0.22:33184 - 37335 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000104708s
	[INFO] 10.244.0.22:55689 - 157 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000149939s
	[INFO] 10.244.0.22:44377 - 44587 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00009157s
	[INFO] 10.244.0.22:46650 - 18381 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000149763s
	[INFO] 10.244.0.22:46073 - 63250 "A IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.003447774s
	[INFO] 10.244.0.22:53766 - 29829 "AAAA IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.003589389s
	[INFO] 10.244.0.22:50872 - 35582 "AAAA IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.005818988s
	[INFO] 10.244.0.22:43543 - 12948 "A IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.005895797s
	[INFO] 10.244.0.22:34547 - 37747 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.003774143s
	[INFO] 10.244.0.22:45740 - 35818 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.00770551s
	[INFO] 10.244.0.22:60635 - 61877 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004433577s
	[INFO] 10.244.0.22:44024 - 35919 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004442583s
	[INFO] 10.244.0.22:39666 - 12354 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 420 0.001208068s
	[INFO] 10.244.0.22:56629 - 758 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001964298s
	[INFO] 10.244.0.25:53247 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000239717s
	[INFO] 10.244.0.25:36201 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000174602s
	
	
	==> describe nodes <==
	Name:               addons-721094
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-721094
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c1f958e1d15faaa2b94ae7399d1155627e45fcf8
	                    minikube.k8s.io/name=addons-721094
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T10_50_26_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-721094
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 10:50:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-721094
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 10:55:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 10:53:38 +0000   Mon, 29 Sep 2025 10:50:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 10:53:38 +0000   Mon, 29 Sep 2025 10:50:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 10:53:38 +0000   Mon, 29 Sep 2025 10:50:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 10:53:38 +0000   Mon, 29 Sep 2025 10:51:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-721094
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	System Info:
	  Machine ID:                 7d12323abd2d4c5297685f81ba9ce70c
	  System UUID:                27a74b58-983d-45fa-8ad1-2e43cdb28eda
	  Boot ID:                    9688b1e6-202b-4b8e-99ec-d05348e21a34
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m6s
	  default                     hello-world-app-5d498dc89-wcnx8             0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m33s
	  gadget                      gadget-jmscx                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m58s
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-k7qpz    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         4m58s
	  kube-system                 coredns-66bc5c9577-dbsnv                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     5m
	  kube-system                 etcd-addons-721094                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         5m7s
	  kube-system                 kindnet-kpbkj                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      5m
	  kube-system                 kube-apiserver-addons-721094                250m (3%)     0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 kube-controller-manager-addons-721094       200m (2%)     0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 kube-proxy-bv2hp                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kube-system                 kube-scheduler-addons-721094                100m (1%)     0 (0%)      0 (0%)           0 (0%)         5m7s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             310Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m58s  kube-proxy       
	  Normal  Starting                 5m5s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m5s   kubelet          Node addons-721094 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m5s   kubelet          Node addons-721094 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m5s   kubelet          Node addons-721094 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m1s   node-controller  Node addons-721094 event: Registered Node addons-721094 in Controller
	  Normal  NodeReady                4m19s  kubelet          Node addons-721094 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e2 b6 ca 06 2a 05 08 06
	[  +6.035985] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e2 73 34 16 44 a1 08 06
	[Sep29 10:53] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 2a 83 76 ef 8e 27 72 78 af fc f4 4a 08 00
	[  +1.017535] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000031] ll header: 00000000: 2a 83 76 ef 8e 27 72 78 af fc f4 4a 08 00
	[  +1.023856] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: 2a 83 76 ef 8e 27 72 78 af fc f4 4a 08 00
	[  +1.023945] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000016] ll header: 00000000: 2a 83 76 ef 8e 27 72 78 af fc f4 4a 08 00
	[  +1.023873] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 2a 83 76 ef 8e 27 72 78 af fc f4 4a 08 00
	[  +1.023900] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 2a 83 76 ef 8e 27 72 78 af fc f4 4a 08 00
	[  +2.047848] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 2a 83 76 ef 8e 27 72 78 af fc f4 4a 08 00
	[  +4.031627] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 2a 83 76 ef 8e 27 72 78 af fc f4 4a 08 00
	[  +8.191397] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 2a 83 76 ef 8e 27 72 78 af fc f4 4a 08 00
	[ +16.382717] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 2a 83 76 ef 8e 27 72 78 af fc f4 4a 08 00
	[Sep29 10:54] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000011] ll header: 00000000: 2a 83 76 ef 8e 27 72 78 af fc f4 4a 08 00
	
	
	==> etcd [1ced7cc99c5515c7b0a9398a6d9494ab176a51e8decd9548c1d116bd91461401] <==
	{"level":"warn","ts":"2025-09-29T10:50:22.101086Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:50:22.108305Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:50:22.116507Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:50:22.126894Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:50:22.136124Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:50:22.150982Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:50:22.157847Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:50:22.163873Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:50:22.170900Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:50:22.176672Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:50:22.183526Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:50:22.189520Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:50:22.196054Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:50:22.202557Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:50:22.209756Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:50:22.218262Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:50:22.233891Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:50:22.241184Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:50:22.247225Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:50:33.529750Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:50:33.536140Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:50:59.728173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:50:59.734623Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:50:59.745419Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:50:59.751535Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42082","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:55:30 up 37 min,  0 users,  load average: 0.25, 0.71, 0.93
	Linux addons-721094 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [e2e2841da1ad29bbf4c91140a5bef52683527bd18f9b3acbecb1db66a60245bd] <==
	I0929 10:53:21.397586       1 main.go:301] handling current node
	I0929 10:53:31.398737       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 10:53:31.398773       1 main.go:301] handling current node
	I0929 10:53:41.398883       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 10:53:41.398946       1 main.go:301] handling current node
	I0929 10:53:51.398189       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 10:53:51.398218       1 main.go:301] handling current node
	I0929 10:54:01.398918       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 10:54:01.398962       1 main.go:301] handling current node
	I0929 10:54:11.402577       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 10:54:11.402625       1 main.go:301] handling current node
	I0929 10:54:21.397878       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 10:54:21.397914       1 main.go:301] handling current node
	I0929 10:54:31.405960       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 10:54:31.406000       1 main.go:301] handling current node
	I0929 10:54:41.401503       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 10:54:41.401532       1 main.go:301] handling current node
	I0929 10:54:51.405950       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 10:54:51.405988       1 main.go:301] handling current node
	I0929 10:55:01.405933       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 10:55:01.405969       1 main.go:301] handling current node
	I0929 10:55:11.402283       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 10:55:11.402321       1 main.go:301] handling current node
	I0929 10:55:21.403507       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 10:55:21.403538       1 main.go:301] handling current node
	
	
	==> kube-apiserver [c0c74784b27834f07e9662bb61daaf5e3060634b4be10140cd8c95a4e7aa7769] <==
	E0929 10:52:33.475063       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:35568: use of closed network connection
	E0929 10:52:33.647422       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:35590: use of closed network connection
	I0929 10:52:42.680693       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.109.235.63"}
	I0929 10:52:57.848265       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I0929 10:52:58.037520       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.104.117.114"}
	I0929 10:53:07.636539       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:53:07.982370       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0929 10:53:21.563729       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	E0929 10:53:34.769973       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0929 10:53:36.763859       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0929 10:53:36.763913       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0929 10:53:36.781470       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0929 10:53:36.781512       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0929 10:53:36.798550       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0929 10:53:36.798696       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0929 10:53:36.817321       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0929 10:53:36.817370       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0929 10:53:37.783041       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0929 10:53:37.818269       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0929 10:53:37.923506       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0929 10:53:39.118776       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:54:20.963400       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:54:42.708252       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:55:26.631286       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:55:29.107327       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.101.76.117"}
	
	
	==> kube-controller-manager [5e03baa330d428d053c0a610b58bd47a0503f5405a5aee09a36c7172a53339d4] <==
	E0929 10:53:47.427897       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:53:55.019053       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:53:55.020025       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:53:56.904423       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:53:56.905377       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:53:57.372276       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:53:57.373210       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	I0929 10:53:59.848575       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I0929 10:53:59.848717       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 10:53:59.865944       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I0929 10:53:59.865988       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0929 10:54:13.495882       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:54:13.496863       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:54:15.692464       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:54:15.693308       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:54:20.150273       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:54:20.151225       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:54:52.651073       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:54:52.652000       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:54:58.186719       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:54:58.187658       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:55:09.501049       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:55:09.502001       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:55:23.073432       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:55:23.074301       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [9ef4471f654bd38595a43d0ae9b53d3fc584996bb8451787bb2bad6fb234688d] <==
	I0929 10:50:31.047365       1 server_linux.go:53] "Using iptables proxy"
	I0929 10:50:31.235251       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 10:50:31.335909       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 10:50:31.335956       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0929 10:50:31.336036       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 10:50:31.591661       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 10:50:31.591729       1 server_linux.go:132] "Using iptables Proxier"
	I0929 10:50:31.654098       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 10:50:31.654558       1 server.go:527] "Version info" version="v1.34.0"
	I0929 10:50:31.654604       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 10:50:31.657925       1 config.go:200] "Starting service config controller"
	I0929 10:50:31.658012       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 10:50:31.658100       1 config.go:106] "Starting endpoint slice config controller"
	I0929 10:50:31.658131       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 10:50:31.658163       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 10:50:31.658185       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 10:50:31.661969       1 config.go:309] "Starting node config controller"
	I0929 10:50:31.664169       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 10:50:31.664249       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 10:50:31.758885       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 10:50:31.759036       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 10:50:31.759138       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [3672ac0879661ba748a3cb274a15a62189114703d6afff814017fc97aa778511] <==
	E0929 10:50:22.737883       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 10:50:22.737886       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0929 10:50:22.737935       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 10:50:22.737976       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0929 10:50:22.737984       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0929 10:50:22.737995       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0929 10:50:22.738036       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0929 10:50:22.738076       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 10:50:22.738080       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0929 10:50:22.738101       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0929 10:50:22.738180       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 10:50:22.738181       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0929 10:50:23.564189       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 10:50:23.567015       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0929 10:50:23.579039       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0929 10:50:23.580917       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 10:50:23.785841       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 10:50:23.826880       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0929 10:50:23.857083       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0929 10:50:23.861269       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0929 10:50:23.869284       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 10:50:23.944574       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0929 10:50:23.958693       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0929 10:50:23.967878       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	I0929 10:50:25.835635       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 29 10:53:50 addons-721094 kubelet[1544]: I0929 10:53:50.014257    1544 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94021b5a39ad1581d3c858709bdc10b5a11583898da77e733abb2660bd47044e"} err="failed to get container status \"94021b5a39ad1581d3c858709bdc10b5a11583898da77e733abb2660bd47044e\": rpc error: code = NotFound desc = could not find container \"94021b5a39ad1581d3c858709bdc10b5a11583898da77e733abb2660bd47044e\": container with ID starting with 94021b5a39ad1581d3c858709bdc10b5a11583898da77e733abb2660bd47044e not found: ID does not exist"
	Sep 29 10:53:51 addons-721094 kubelet[1544]: I0929 10:53:51.320352    1544 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41a10abb-46d6-4617-a3be-dbff201d1ade" path="/var/lib/kubelet/pods/41a10abb-46d6-4617-a3be-dbff201d1ade/volumes"
	Sep 29 10:53:52 addons-721094 kubelet[1544]: I0929 10:53:52.318581    1544 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Sep 29 10:53:55 addons-721094 kubelet[1544]: E0929 10:53:55.359846    1544 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759143235359584531  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 29 10:53:55 addons-721094 kubelet[1544]: E0929 10:53:55.359872    1544 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759143235359584531  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 29 10:54:05 addons-721094 kubelet[1544]: E0929 10:54:05.362480    1544 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759143245362236399  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 29 10:54:05 addons-721094 kubelet[1544]: E0929 10:54:05.362508    1544 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759143245362236399  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 29 10:54:15 addons-721094 kubelet[1544]: E0929 10:54:15.364951    1544 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759143255364693640  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 29 10:54:15 addons-721094 kubelet[1544]: E0929 10:54:15.364986    1544 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759143255364693640  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 29 10:54:25 addons-721094 kubelet[1544]: E0929 10:54:25.367248    1544 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759143265367026191  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 29 10:54:25 addons-721094 kubelet[1544]: E0929 10:54:25.367281    1544 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759143265367026191  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 29 10:54:35 addons-721094 kubelet[1544]: E0929 10:54:35.369908    1544 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759143275369633575  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 29 10:54:35 addons-721094 kubelet[1544]: E0929 10:54:35.369944    1544 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759143275369633575  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 29 10:54:45 addons-721094 kubelet[1544]: E0929 10:54:45.372616    1544 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759143285372396085  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 29 10:54:45 addons-721094 kubelet[1544]: E0929 10:54:45.372645    1544 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759143285372396085  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 29 10:54:55 addons-721094 kubelet[1544]: E0929 10:54:55.375380    1544 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759143295375109094  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 29 10:54:55 addons-721094 kubelet[1544]: E0929 10:54:55.375419    1544 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759143295375109094  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 29 10:55:03 addons-721094 kubelet[1544]: I0929 10:55:03.318531    1544 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Sep 29 10:55:05 addons-721094 kubelet[1544]: E0929 10:55:05.378182    1544 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759143305377857599  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 29 10:55:05 addons-721094 kubelet[1544]: E0929 10:55:05.378216    1544 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759143305377857599  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 29 10:55:15 addons-721094 kubelet[1544]: E0929 10:55:15.380816    1544 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759143315380568976  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 29 10:55:15 addons-721094 kubelet[1544]: E0929 10:55:15.380862    1544 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759143315380568976  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 29 10:55:25 addons-721094 kubelet[1544]: E0929 10:55:25.383204    1544 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759143325382981524  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 29 10:55:25 addons-721094 kubelet[1544]: E0929 10:55:25.383235    1544 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759143325382981524  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 29 10:55:29 addons-721094 kubelet[1544]: I0929 10:55:29.062898    1544 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8g6k\" (UniqueName: \"kubernetes.io/projected/7cd8d92d-6d8e-4543-967d-46dcc5d24da0-kube-api-access-j8g6k\") pod \"hello-world-app-5d498dc89-wcnx8\" (UID: \"7cd8d92d-6d8e-4543-967d-46dcc5d24da0\") " pod="default/hello-world-app-5d498dc89-wcnx8"
	
	
	==> storage-provisioner [f5bff8e05b98e9aa165191d7da18ed02a096f8e8e650ced67fb413c19865c5a9] <==
	W0929 10:55:05.421633       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:55:07.424972       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:55:07.429396       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:55:09.432387       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:55:09.436376       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:55:11.439138       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:55:11.443751       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:55:13.447527       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:55:13.451216       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:55:15.454577       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:55:15.458399       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:55:17.461043       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:55:17.465949       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:55:19.469075       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:55:19.473575       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:55:21.476533       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:55:21.480542       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:55:23.484020       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:55:23.487746       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:55:25.490509       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:55:25.494254       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:55:27.497421       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:55:27.501454       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:55:29.504522       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:55:29.509146       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
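Note: the repeated client-go warnings in the storage-provisioner log above are emitted whenever a client still reads or writes the deprecated core/v1 Endpoints API. The replacement resource named in the warning can be listed directly; a minimal sketch using standard kubectl (not part of the recorded run):

  kubectl --context addons-721094 get endpointslices.discovery.k8s.io -A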
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-721094 -n addons-721094
helpers_test.go:269: (dbg) Run:  kubectl --context addons-721094 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-wcnx8 ingress-nginx-admission-create-r95n4 ingress-nginx-admission-patch-2j6cm
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-721094 describe pod hello-world-app-5d498dc89-wcnx8 ingress-nginx-admission-create-r95n4 ingress-nginx-admission-patch-2j6cm
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-721094 describe pod hello-world-app-5d498dc89-wcnx8 ingress-nginx-admission-create-r95n4 ingress-nginx-admission-patch-2j6cm: exit status 1 (70.633814ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-5d498dc89-wcnx8
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-721094/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 10:55:29 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-j8g6k (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-j8g6k:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-wcnx8 to addons-721094
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-r95n4" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-2j6cm" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-721094 describe pod hello-world-app-5d498dc89-wcnx8 ingress-nginx-admission-create-r95n4 ingress-nginx-admission-patch-2j6cm: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-721094 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-721094 addons disable ingress-dns --alsologtostderr -v=1: (1.664533542s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-721094 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-721094 addons disable ingress --alsologtostderr -v=1: (7.659130262s)
--- FAIL: TestAddons/parallel/Ingress (163.34s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (603.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-992121 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-992121 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-zr7v6" [e6dced3d-0c5e-4b68-84a3-59861a33bc24] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
I0929 10:59:33.657115  132495 detect.go:223] nested VM detected
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-992121 -n functional-992121
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-09-29 11:09:33.848637752 +0000 UTC m=+1213.829695320
functional_test.go:1645: (dbg) Run:  kubectl --context functional-992121 describe po hello-node-connect-7d85dfc575-zr7v6 -n default
functional_test.go:1645: (dbg) kubectl --context functional-992121 describe po hello-node-connect-7d85dfc575-zr7v6 -n default:
Name:             hello-node-connect-7d85dfc575-zr7v6
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-992121/192.168.49.2
Start Time:       Mon, 29 Sep 2025 10:59:33 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.10
IPs:
IP:           10.244.0.10
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pkxxq (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-pkxxq:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-zr7v6 to functional-992121
Normal   Pulling    7m3s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m3s (x5 over 10m)      kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
Warning  Failed     7m3s (x5 over 10m)      kubelet            Error: ErrImagePull
Normal   BackOff    4m54s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m54s (x21 over 9m59s)  kubelet            Error: ImagePullBackOff
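Note: the Failed events above show CRI-O rejecting the unqualified image reference ("kicbase/echo-server") used when the deployment was created. Two common remedies, shown as a hedged sketch (the docker.io registry and the :1.0 tag are assumptions; neither command is part of the recorded run):

  # Option 1: point the deployment at a fully-qualified reference so no short-name lookup is needed
  kubectl --context functional-992121 set image deployment/hello-node-connect \
    echo-server=docker.io/kicbase/echo-server:1.0

  # Option 2: allow short-name resolution on the node itself
  minikube -p functional-992121 ssh
  # then, inside the node (syntax per containers-registries.conf(5); a CRI-O restart is assumed to be required):
  #   echo 'unqualified-search-registries = ["docker.io"]' | sudo tee -a /etc/containers/registries.conf
  #   sudo systemctl restart crio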
functional_test.go:1645: (dbg) Run:  kubectl --context functional-992121 logs hello-node-connect-7d85dfc575-zr7v6 -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-992121 logs hello-node-connect-7d85dfc575-zr7v6 -n default: exit status 1 (70.038908ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-zr7v6" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-992121 logs hello-node-connect-7d85dfc575-zr7v6 -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-992121 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-zr7v6
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-992121/192.168.49.2
Start Time:       Mon, 29 Sep 2025 10:59:33 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.10
IPs:
IP:           10.244.0.10
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pkxxq (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-pkxxq:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-zr7v6 to functional-992121
Normal   Pulling    7m4s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m4s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
Warning  Failed     7m4s (x5 over 10m)    kubelet            Error: ErrImagePull
Normal   BackOff    4m55s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m55s (x21 over 10m)  kubelet            Error: ImagePullBackOff

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-992121 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-992121 logs -l app=hello-node-connect: exit status 1 (98.388031ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-zr7v6" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-992121 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-992121 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.108.212.34
IPs:                      10.108.212.34
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31108/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
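Note: the empty Endpoints field in the service description above is consistent with the pod never reaching Ready, so the NodePort has no backend to route to. One way to confirm, as a sketch using standard kubectl selectors (not part of the recorded run):

  kubectl --context functional-992121 get endpointslices -l kubernetes.io/service-name=hello-node-connect
  kubectl --context functional-992121 get pods -l app=hello-node-connect -o wide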
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-992121
helpers_test.go:243: (dbg) docker inspect functional-992121:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "455822b9f4a8ef4bf0902ba809bc0e83f2db607f7ee45a770c7a7f048730d148",
	        "Created": "2025-09-29T10:56:52.83906425Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 158975,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T10:56:52.879181755Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/455822b9f4a8ef4bf0902ba809bc0e83f2db607f7ee45a770c7a7f048730d148/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/455822b9f4a8ef4bf0902ba809bc0e83f2db607f7ee45a770c7a7f048730d148/hostname",
	        "HostsPath": "/var/lib/docker/containers/455822b9f4a8ef4bf0902ba809bc0e83f2db607f7ee45a770c7a7f048730d148/hosts",
	        "LogPath": "/var/lib/docker/containers/455822b9f4a8ef4bf0902ba809bc0e83f2db607f7ee45a770c7a7f048730d148/455822b9f4a8ef4bf0902ba809bc0e83f2db607f7ee45a770c7a7f048730d148-json.log",
	        "Name": "/functional-992121",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-992121:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-992121",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "455822b9f4a8ef4bf0902ba809bc0e83f2db607f7ee45a770c7a7f048730d148",
	                "LowerDir": "/var/lib/docker/overlay2/7086f7335b8615a446285e162777a5e76a5fb55c1c99f9123bb4d62a728d74aa-init/diff:/var/lib/docker/overlay2/6f46731317f9b9f8dbf1d4a7e01ff0254d8f3e30fed041625466f4497703adcb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7086f7335b8615a446285e162777a5e76a5fb55c1c99f9123bb4d62a728d74aa/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7086f7335b8615a446285e162777a5e76a5fb55c1c99f9123bb4d62a728d74aa/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7086f7335b8615a446285e162777a5e76a5fb55c1c99f9123bb4d62a728d74aa/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-992121",
	                "Source": "/var/lib/docker/volumes/functional-992121/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-992121",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-992121",
	                "name.minikube.sigs.k8s.io": "functional-992121",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "65431364ba8a89857e495e6a76df15ab58e9f1328829bfff99decf25aae9e7b5",
	            "SandboxKey": "/var/run/docker/netns/65431364ba8a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32898"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32899"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32902"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32900"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32901"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-992121": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ea:bc:63:a6:b7:de",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3423ba4c0a1d258fee1f775f1891c488786f4263f0e606ecf6d24e690614a4be",
	                    "EndpointID": "5f1cb817c55d1a693889280ad0c5da557e5548af9939bd65a477d36de6a60a78",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-992121",
	                        "455822b9f4a8"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-992121 -n functional-992121
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-992121 logs -n 25: (1.442341416s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                  ARGS                                                  │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-992121 image save --daemon kicbase/echo-server:functional-992121 --alsologtostderr          │ functional-992121 │ jenkins │ v1.37.0 │ 29 Sep 25 10:59 UTC │ 29 Sep 25 10:59 UTC │
	│ addons         │ functional-992121 addons list                                                                          │ functional-992121 │ jenkins │ v1.37.0 │ 29 Sep 25 10:59 UTC │ 29 Sep 25 10:59 UTC │
	│ addons         │ functional-992121 addons list -o json                                                                  │ functional-992121 │ jenkins │ v1.37.0 │ 29 Sep 25 10:59 UTC │ 29 Sep 25 10:59 UTC │
	│ ssh            │ functional-992121 ssh sudo cat /etc/ssl/certs/132495.pem                                               │ functional-992121 │ jenkins │ v1.37.0 │ 29 Sep 25 10:59 UTC │ 29 Sep 25 10:59 UTC │
	│ ssh            │ functional-992121 ssh sudo cat /usr/share/ca-certificates/132495.pem                                   │ functional-992121 │ jenkins │ v1.37.0 │ 29 Sep 25 10:59 UTC │ 29 Sep 25 10:59 UTC │
	│ ssh            │ functional-992121 ssh sudo cat /etc/ssl/certs/51391683.0                                               │ functional-992121 │ jenkins │ v1.37.0 │ 29 Sep 25 10:59 UTC │ 29 Sep 25 10:59 UTC │
	│ ssh            │ functional-992121 ssh sudo cat /etc/ssl/certs/1324952.pem                                              │ functional-992121 │ jenkins │ v1.37.0 │ 29 Sep 25 10:59 UTC │ 29 Sep 25 10:59 UTC │
	│ ssh            │ functional-992121 ssh sudo cat /usr/share/ca-certificates/1324952.pem                                  │ functional-992121 │ jenkins │ v1.37.0 │ 29 Sep 25 10:59 UTC │ 29 Sep 25 10:59 UTC │
	│ ssh            │ functional-992121 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                               │ functional-992121 │ jenkins │ v1.37.0 │ 29 Sep 25 10:59 UTC │ 29 Sep 25 10:59 UTC │
	│ ssh            │ functional-992121 ssh sudo cat /etc/test/nested/copy/132495/hosts                                      │ functional-992121 │ jenkins │ v1.37.0 │ 29 Sep 25 10:59 UTC │ 29 Sep 25 10:59 UTC │
	│ image          │ functional-992121 image ls --format short --alsologtostderr                                            │ functional-992121 │ jenkins │ v1.37.0 │ 29 Sep 25 10:59 UTC │ 29 Sep 25 10:59 UTC │
	│ image          │ functional-992121 image ls --format yaml --alsologtostderr                                             │ functional-992121 │ jenkins │ v1.37.0 │ 29 Sep 25 10:59 UTC │ 29 Sep 25 10:59 UTC │
	│ ssh            │ functional-992121 ssh pgrep buildkitd                                                                  │ functional-992121 │ jenkins │ v1.37.0 │ 29 Sep 25 10:59 UTC │                     │
	│ image          │ functional-992121 image build -t localhost/my-image:functional-992121 testdata/build --alsologtostderr │ functional-992121 │ jenkins │ v1.37.0 │ 29 Sep 25 10:59 UTC │ 29 Sep 25 10:59 UTC │
	│ image          │ functional-992121 image ls                                                                             │ functional-992121 │ jenkins │ v1.37.0 │ 29 Sep 25 10:59 UTC │ 29 Sep 25 10:59 UTC │
	│ image          │ functional-992121 image ls --format json --alsologtostderr                                             │ functional-992121 │ jenkins │ v1.37.0 │ 29 Sep 25 10:59 UTC │ 29 Sep 25 10:59 UTC │
	│ image          │ functional-992121 image ls --format table --alsologtostderr                                            │ functional-992121 │ jenkins │ v1.37.0 │ 29 Sep 25 10:59 UTC │ 29 Sep 25 10:59 UTC │
	│ update-context │ functional-992121 update-context --alsologtostderr -v=2                                                │ functional-992121 │ jenkins │ v1.37.0 │ 29 Sep 25 10:59 UTC │ 29 Sep 25 10:59 UTC │
	│ update-context │ functional-992121 update-context --alsologtostderr -v=2                                                │ functional-992121 │ jenkins │ v1.37.0 │ 29 Sep 25 10:59 UTC │ 29 Sep 25 10:59 UTC │
	│ update-context │ functional-992121 update-context --alsologtostderr -v=2                                                │ functional-992121 │ jenkins │ v1.37.0 │ 29 Sep 25 10:59 UTC │ 29 Sep 25 10:59 UTC │
	│ service        │ functional-992121 service list                                                                         │ functional-992121 │ jenkins │ v1.37.0 │ 29 Sep 25 11:09 UTC │ 29 Sep 25 11:09 UTC │
	│ service        │ functional-992121 service list -o json                                                                 │ functional-992121 │ jenkins │ v1.37.0 │ 29 Sep 25 11:09 UTC │ 29 Sep 25 11:09 UTC │
	│ service        │ functional-992121 service --namespace=default --https --url hello-node                                 │ functional-992121 │ jenkins │ v1.37.0 │ 29 Sep 25 11:09 UTC │                     │
	│ service        │ functional-992121 service hello-node --url --format={{.IP}}                                            │ functional-992121 │ jenkins │ v1.37.0 │ 29 Sep 25 11:09 UTC │                     │
	│ service        │ functional-992121 service hello-node --url                                                             │ functional-992121 │ jenkins │ v1.37.0 │ 29 Sep 25 11:09 UTC │                     │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 10:59:08
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 10:59:08.738851  169997 out.go:360] Setting OutFile to fd 1 ...
	I0929 10:59:08.739240  169997 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:59:08.739254  169997 out.go:374] Setting ErrFile to fd 2...
	I0929 10:59:08.739261  169997 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:59:08.739560  169997 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21656-128977/.minikube/bin
	I0929 10:59:08.740121  169997 out.go:368] Setting JSON to false
	I0929 10:59:08.741195  169997 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":2487,"bootTime":1759141062,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 10:59:08.741292  169997 start.go:140] virtualization: kvm guest
	I0929 10:59:08.745957  169997 out.go:179] * [functional-992121] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 10:59:08.747939  169997 notify.go:220] Checking for updates...
	I0929 10:59:08.747957  169997 out.go:179]   - MINIKUBE_LOCATION=21656
	I0929 10:59:08.749232  169997 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 10:59:08.750370  169997 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21656-128977/kubeconfig
	I0929 10:59:08.751667  169997 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21656-128977/.minikube
	I0929 10:59:08.752746  169997 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 10:59:08.753851  169997 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 10:59:08.755534  169997 config.go:182] Loaded profile config "functional-992121": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 10:59:08.756162  169997 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 10:59:08.782862  169997 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 10:59:08.783019  169997 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 10:59:08.862533  169997 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-09-29 10:59:08.845080873 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 10:59:08.862692  169997 docker.go:318] overlay module found
	I0929 10:59:08.865950  169997 out.go:179] * Using the docker driver based on existing profile
	I0929 10:59:08.867232  169997 start.go:304] selected driver: docker
	I0929 10:59:08.867252  169997 start.go:924] validating driver "docker" against &{Name:functional-992121 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-992121 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mou
ntUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 10:59:08.867355  169997 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 10:59:08.867468  169997 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 10:59:08.946142  169997 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-09-29 10:59:08.931808541 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 10:59:08.946920  169997 cni.go:84] Creating CNI manager for ""
	I0929 10:59:08.946989  169997 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 10:59:08.947065  169997 start.go:348] cluster config:
	{Name:functional-992121 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-992121 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket:
NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Disab
leCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 10:59:08.948975  169997 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Sep 29 10:59:42 functional-992121 crio[4223]: time="2025-09-29 10:59:42.050439851Z" level=info msg="Removed pod sandbox: 36fc1c3dc64ca0fbca73a10245363276f28c032f3d1d6efc4f86e586a6a322e7" id=adb526f4-77e3-479b-9fca-9a36d73753b7 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 29 10:59:42 functional-992121 crio[4223]: time="2025-09-29 10:59:42.050863288Z" level=info msg="Stopping pod sandbox: 5914b0c07e253ad6498eca9bdeb3abc2f1f7ba9761ef3e82d28d5a805fdb1e1f" id=84eb0f54-bfaf-4ea2-9874-83da7532efbf name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 29 10:59:42 functional-992121 crio[4223]: time="2025-09-29 10:59:42.050908146Z" level=info msg="Stopped pod sandbox (already stopped): 5914b0c07e253ad6498eca9bdeb3abc2f1f7ba9761ef3e82d28d5a805fdb1e1f" id=84eb0f54-bfaf-4ea2-9874-83da7532efbf name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 29 10:59:42 functional-992121 crio[4223]: time="2025-09-29 10:59:42.051242542Z" level=info msg="Removing pod sandbox: 5914b0c07e253ad6498eca9bdeb3abc2f1f7ba9761ef3e82d28d5a805fdb1e1f" id=2660219e-f964-4395-8f6b-9b1aeefb416d name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 29 10:59:42 functional-992121 crio[4223]: time="2025-09-29 10:59:42.057356819Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Sep 29 10:59:42 functional-992121 crio[4223]: time="2025-09-29 10:59:42.057390324Z" level=info msg="Removed pod sandbox: 5914b0c07e253ad6498eca9bdeb3abc2f1f7ba9761ef3e82d28d5a805fdb1e1f" id=2660219e-f964-4395-8f6b-9b1aeefb416d name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 29 10:59:48 functional-992121 crio[4223]: time="2025-09-29 10:59:48.725319161Z" level=info msg="Pulled image: docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb" id=f6becfff-1e00-4072-9a12-69912d1ae2fc name=/runtime.v1.ImageService/PullImage
	Sep 29 10:59:48 functional-992121 crio[4223]: time="2025-09-29 10:59:48.726004389Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=4ab43460-7b43-4c5c-a838-e1025613f62d name=/runtime.v1.ImageService/ImageStatus
	Sep 29 10:59:48 functional-992121 crio[4223]: time="2025-09-29 10:59:48.727262151Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,RepoTags:[docker.io/library/mysql:5.7],RepoDigests:[docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da],Size_:519571821,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=4ab43460-7b43-4c5c-a838-e1025613f62d name=/runtime.v1.ImageService/ImageStatus
	Sep 29 10:59:48 functional-992121 crio[4223]: time="2025-09-29 10:59:48.727878773Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=5ab3cae9-2086-493f-a379-4218ecdd9acf name=/runtime.v1.ImageService/PullImage
	Sep 29 10:59:48 functional-992121 crio[4223]: time="2025-09-29 10:59:48.728201579Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=22df11f2-6516-4b1c-bb6a-8c87eaa2eab4 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 10:59:48 functional-992121 crio[4223]: time="2025-09-29 10:59:48.729584573Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,RepoTags:[docker.io/library/mysql:5.7],RepoDigests:[docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da],Size_:519571821,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=22df11f2-6516-4b1c-bb6a-8c87eaa2eab4 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 10:59:48 functional-992121 crio[4223]: time="2025-09-29 10:59:48.733253223Z" level=info msg="Creating container: default/mysql-5bb876957f-wkvlt/mysql" id=3a6e584e-ee25-43b2-b60f-b557053b2de0 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 29 10:59:48 functional-992121 crio[4223]: time="2025-09-29 10:59:48.733375640Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 29 10:59:48 functional-992121 crio[4223]: time="2025-09-29 10:59:48.815059419Z" level=info msg="Created container aafd3948a1043b266ce1f209904638300a260b95606dbdc527bfad8d1d36c715: default/mysql-5bb876957f-wkvlt/mysql" id=3a6e584e-ee25-43b2-b60f-b557053b2de0 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 29 10:59:48 functional-992121 crio[4223]: time="2025-09-29 10:59:48.815728011Z" level=info msg="Starting container: aafd3948a1043b266ce1f209904638300a260b95606dbdc527bfad8d1d36c715" id=9080a0bb-1763-462d-8bd1-a59a16402b17 name=/runtime.v1.RuntimeService/StartContainer
	Sep 29 10:59:48 functional-992121 crio[4223]: time="2025-09-29 10:59:48.823015988Z" level=info msg="Started container" PID=8621 containerID=aafd3948a1043b266ce1f209904638300a260b95606dbdc527bfad8d1d36c715 description=default/mysql-5bb876957f-wkvlt/mysql id=9080a0bb-1763-462d-8bd1-a59a16402b17 name=/runtime.v1.RuntimeService/StartContainer sandboxID=53ef8909a7267f0f5dfc6983e19e83fac41917f62e2b5d6ddb0bcb7d01844970
	Sep 29 10:59:52 functional-992121 crio[4223]: time="2025-09-29 10:59:52.052894387Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=0b19ffcc-41d6-4491-a4e4-31d1890c6aaa name=/runtime.v1.ImageService/PullImage
	Sep 29 11:00:12 functional-992121 crio[4223]: time="2025-09-29 11:00:12.053917052Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=8992fd9e-6e4f-4963-a45e-1a35f2bdd6bc name=/runtime.v1.ImageService/PullImage
	Sep 29 11:00:42 functional-992121 crio[4223]: time="2025-09-29 11:00:42.052131535Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=c1e6889d-b78a-4244-8b77-325ab0a3ba27 name=/runtime.v1.ImageService/PullImage
	Sep 29 11:00:58 functional-992121 crio[4223]: time="2025-09-29 11:00:58.052771921Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=3071cf40-5795-446f-acd4-1e9d9a74030f name=/runtime.v1.ImageService/PullImage
	Sep 29 11:02:13 functional-992121 crio[4223]: time="2025-09-29 11:02:13.052172283Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=57ffe726-2e49-4f6e-9ec7-71d409b6f7e1 name=/runtime.v1.ImageService/PullImage
	Sep 29 11:02:30 functional-992121 crio[4223]: time="2025-09-29 11:02:30.051946165Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=0aaae0ed-4690-4998-8aeb-75dfddb9782c name=/runtime.v1.ImageService/PullImage
	Sep 29 11:05:01 functional-992121 crio[4223]: time="2025-09-29 11:05:01.052552788Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=009bd877-7146-40a5-b393-87b7ef2e72f4 name=/runtime.v1.ImageService/PullImage
	Sep 29 11:05:15 functional-992121 crio[4223]: time="2025-09-29 11:05:15.052655252Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=47879d46-f9f8-4c46-8515-c018cbecadf8 name=/runtime.v1.ImageService/PullImage
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	aafd3948a1043       docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb                  9 minutes ago       Running             mysql                       0                   53ef8909a7267       mysql-5bb876957f-wkvlt
	1ab37151e5fe9       docker.io/library/nginx@sha256:27637a97e3d1d0518adc2a877b60db3779970f19474b6e586ddcbc2d5500e285                  10 minutes ago      Running             myfrontend                  0                   7eb6f95a238a5       sp-pod
	8906fefbe92bd       docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8                  10 minutes ago      Running             nginx                       0                   142a95d72946d       nginx-svc
	719114b2cacd8       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93         10 minutes ago      Running             kubernetes-dashboard        0                   5b82443729a30       kubernetes-dashboard-855c9754f9-rnnxg
	a0e16e41660f2       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   10 minutes ago      Running             dashboard-metrics-scraper   0                   d894b4d441cc3       dashboard-metrics-scraper-77bf4d6c4c-tw4g8
	2e18fd950876d       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e              10 minutes ago      Exited              mount-munger                0                   7142c2626360a       busybox-mount
	0f4f78c9f3b14       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                                 10 minutes ago      Running             kube-apiserver              0                   5eb43681a8c16       kube-apiserver-functional-992121
	e0814a8a9485f       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                                 10 minutes ago      Running             kube-controller-manager     2                   c95267b3ea600       kube-controller-manager-functional-992121
	ab58a776e6691       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                                 10 minutes ago      Running             kube-scheduler              1                   53797addf5bec       kube-scheduler-functional-992121
	09b9224c8c9fe       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 11 minutes ago      Running             etcd                        1                   50e9b9f5b012c       etcd-functional-992121
	bb724c19e10e4       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                                 11 minutes ago      Exited              kube-controller-manager     1                   c95267b3ea600       kube-controller-manager-functional-992121
	2f738c958775d       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 11 minutes ago      Running             kindnet-cni                 1                   24538a7bcd0a1       kindnet-gncrk
	85f428d1b9a1e       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                                 11 minutes ago      Running             kube-proxy                  1                   1ff71868af4a4       kube-proxy-z2m4r
	a03b74b4099be       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 11 minutes ago      Running             coredns                     1                   4241d020c8d56       coredns-66bc5c9577-n5dz8
	5971ed2af0b6d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 11 minutes ago      Running             storage-provisioner         1                   af6e744d88e78       storage-provisioner
	1d6c0aed32b1c       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 11 minutes ago      Exited              coredns                     0                   4241d020c8d56       coredns-66bc5c9577-n5dz8
	0795ed6bb4cfd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 11 minutes ago      Exited              storage-provisioner         0                   af6e744d88e78       storage-provisioner
	f701efdb742f9       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                                 12 minutes ago      Exited              kube-proxy                  0                   1ff71868af4a4       kube-proxy-z2m4r
	95f552c1ec6f7       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 12 minutes ago      Exited              kindnet-cni                 0                   24538a7bcd0a1       kindnet-gncrk
	f8b669e93fbfc       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 12 minutes ago      Exited              etcd                        0                   50e9b9f5b012c       etcd-functional-992121
	4dbd347d5cd9a       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                                 12 minutes ago      Exited              kube-scheduler              0                   53797addf5bec       kube-scheduler-functional-992121
	
	
	==> coredns [1d6c0aed32b1c811f01934bd0fd2309308f82808817c89221296d6f807f10d84] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52349 - 11555 "HINFO IN 1153482510153523166.7512127756361832282. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.061094919s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a03b74b4099bee6864665e619ef055c9278609cadd2b798c80eb0af371fa6934] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53265 - 30377 "HINFO IN 2661641482448390440.5572207491627990120. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.017605766s
	
	
	==> describe nodes <==
	Name:               functional-992121
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-992121
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c1f958e1d15faaa2b94ae7399d1155627e45fcf8
	                    minikube.k8s.io/name=functional-992121
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T10_57_06_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 10:57:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-992121
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 11:09:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 11:05:41 +0000   Mon, 29 Sep 2025 10:57:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 11:05:41 +0000   Mon, 29 Sep 2025 10:57:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 11:05:41 +0000   Mon, 29 Sep 2025 10:57:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 11:05:41 +0000   Mon, 29 Sep 2025 10:57:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-992121
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	System Info:
	  Machine ID:                 803b2c52f3714772821deddbc5fd1426
	  System UUID:                4be4656a-1ce4-4d52-981a-468f52fcaf45
	  Boot ID:                    9688b1e6-202b-4b8e-99ec-d05348e21a34
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-vk26j                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-7d85dfc575-zr7v6           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-wkvlt                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     9m55s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-n5dz8                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     12m
	  kube-system                 etcd-functional-992121                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-gncrk                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-992121              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-992121     200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-z2m4r                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-992121              100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-tw4g8    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-rnnxg         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  Starting                 11m                kube-proxy       
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node functional-992121 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node functional-992121 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node functional-992121 status is now: NodeHasSufficientPID
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           12m                node-controller  Node functional-992121 event: Registered Node functional-992121 in Controller
	  Normal  NodeReady                11m                kubelet          Node functional-992121 status is now: NodeReady
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-992121 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-992121 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-992121 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                node-controller  Node functional-992121 event: Registered Node functional-992121 in Controller
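	
	  For reference, the request/limit percentages in the table above appear to be computed against the node's allocatable capacity (8 CPUs = 8000m, 32863460Ki memory): 1450m / 8000m ≈ 18% CPU requests and 800m / 8000m = 10% CPU limits, while 732Mi (≈ 749568Ki) of memory requests is roughly 2% of allocatable.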
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e2 b6 ca 06 2a 05 08 06
	[  +6.035985] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e2 73 34 16 44 a1 08 06
	[Sep29 10:53] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 2a 83 76 ef 8e 27 72 78 af fc f4 4a 08 00
	[  +1.017535] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000031] ll header: 00000000: 2a 83 76 ef 8e 27 72 78 af fc f4 4a 08 00
	[  +1.023856] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: 2a 83 76 ef 8e 27 72 78 af fc f4 4a 08 00
	[  +1.023945] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000016] ll header: 00000000: 2a 83 76 ef 8e 27 72 78 af fc f4 4a 08 00
	[  +1.023873] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 2a 83 76 ef 8e 27 72 78 af fc f4 4a 08 00
	[  +1.023900] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 2a 83 76 ef 8e 27 72 78 af fc f4 4a 08 00
	[  +2.047848] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 2a 83 76 ef 8e 27 72 78 af fc f4 4a 08 00
	[  +4.031627] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 2a 83 76 ef 8e 27 72 78 af fc f4 4a 08 00
	[  +8.191397] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 2a 83 76 ef 8e 27 72 78 af fc f4 4a 08 00
	[ +16.382717] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 2a 83 76 ef 8e 27 72 78 af fc f4 4a 08 00
	[Sep29 10:54] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000011] ll header: 00000000: 2a 83 76 ef 8e 27 72 78 af fc f4 4a 08 00
	
	
	==> etcd [09b9224c8c9fe4cec1fd4eeeb2e57bb8801f7a515e8b45f0d5adbf30049da961] <==
	{"level":"warn","ts":"2025-09-29T10:58:43.695035Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:58:43.702166Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:58:43.708181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:58:43.714706Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:58:43.721890Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:58:43.727883Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:58:43.733878Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:58:43.740810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:58:43.746919Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:58:43.754561Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:58:43.760522Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:58:43.766575Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:58:43.784875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:58:43.793257Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:58:43.799227Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:58:43.805379Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:58:43.811579Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:58:43.817785Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:58:43.837636Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:58:43.843998Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:58:43.850690Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:58:43.899011Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59252","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T11:08:43.420540Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1153}
	{"level":"info","ts":"2025-09-29T11:08:43.440161Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1153,"took":"19.25128ms","hash":4211308642,"current-db-size-bytes":3411968,"current-db-size":"3.4 MB","current-db-size-in-use-bytes":1527808,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2025-09-29T11:08:43.440215Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":4211308642,"revision":1153,"compact-revision":-1}
	
	
	==> etcd [f8b669e93fbfc86478e49cbe945548f5454d837b5960d5d1d6096e8e8c21a39a] <==
	{"level":"warn","ts":"2025-09-29T10:57:03.269552Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:57:03.277761Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:57:03.284577Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:57:03.297498Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:57:03.304422Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:57:03.310896Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:57:03.358315Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47310","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T10:58:22.370943Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-29T10:58:22.371056Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-992121","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-09-29T10:58:22.371151Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T10:58:29.372566Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T10:58:29.372699Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T10:58:29.372772Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"warn","ts":"2025-09-29T10:58:29.372806Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T10:58:29.372856Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-29T10:58:29.372868Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T10:58:29.372876Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-09-29T10:58:29.372767Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T10:58:29.372903Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-29T10:58:29.372916Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T10:58:29.372932Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-09-29T10:58:29.375075Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-09-29T10:58:29.375128Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T10:58:29.375149Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-09-29T10:58:29.375154Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-992121","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 11:09:35 up 51 min,  0 users,  load average: 0.21, 0.29, 0.58
	Linux functional-992121 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [2f738c958775dde8ee24d1ae5addfc143f878a952e4951b976367b9629b6f59b] <==
	I0929 11:07:33.534241       1 main.go:301] handling current node
	I0929 11:07:43.537351       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:07:43.537384       1 main.go:301] handling current node
	I0929 11:07:53.528968       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:07:53.529001       1 main.go:301] handling current node
	I0929 11:08:03.527991       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:08:03.528043       1 main.go:301] handling current node
	I0929 11:08:13.534110       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:08:13.534142       1 main.go:301] handling current node
	I0929 11:08:23.531030       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:08:23.531072       1 main.go:301] handling current node
	I0929 11:08:33.528439       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:08:33.528479       1 main.go:301] handling current node
	I0929 11:08:43.536582       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:08:43.536608       1 main.go:301] handling current node
	I0929 11:08:53.531923       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:08:53.531982       1 main.go:301] handling current node
	I0929 11:09:03.528649       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:09:03.528690       1 main.go:301] handling current node
	I0929 11:09:13.527792       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:09:13.527853       1 main.go:301] handling current node
	I0929 11:09:23.533071       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:09:23.533109       1 main.go:301] handling current node
	I0929 11:09:33.536615       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:09:33.536654       1 main.go:301] handling current node
	
	
	==> kindnet [95f552c1ec6f7897084c2d8c8ca537a616b4cd522fd64afc95996c99878fa9c6] <==
	I0929 10:57:12.462246       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0929 10:57:12.462554       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0929 10:57:12.462744       1 main.go:148] setting mtu 1500 for CNI 
	I0929 10:57:12.462764       1 main.go:178] kindnetd IP family: "ipv4"
	I0929 10:57:12.462782       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-29T10:57:12Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0929 10:57:12.657233       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0929 10:57:12.657700       1 controller.go:381] "Waiting for informer caches to sync"
	I0929 10:57:12.657769       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0929 10:57:12.658168       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E0929 10:57:42.659109       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E0929 10:57:42.659125       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E0929 10:57:42.659139       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E0929 10:57:42.659122       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I0929 10:57:43.658701       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0929 10:57:43.658736       1 metrics.go:72] Registering metrics
	I0929 10:57:43.658791       1 controller.go:711] "Syncing nftables rules"
	I0929 10:57:52.657074       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 10:57:52.657133       1 main.go:301] handling current node
	I0929 10:58:02.664935       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 10:58:02.664965       1 main.go:301] handling current node
	I0929 10:58:12.661900       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 10:58:12.661940       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0f4f78c9f3b1402c0fe74c272a6381624ddbe2b31ba9601ef7a8f93ce28145c7] <==
	I0929 10:59:10.003797       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.253.185"}
	I0929 10:59:10.015771       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.245.45"}
	I0929 10:59:20.774875       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.105.131.202"}
	E0929 10:59:32.957860       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:55682: use of closed network connection
	I0929 10:59:33.536799       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.108.212.34"}
	I0929 10:59:40.084964       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.107.108.92"}
	E0929 10:59:41.763881       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:45616: use of closed network connection
	E0929 10:59:56.233589       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:40336: use of closed network connection
	E0929 10:59:57.020394       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:40354: use of closed network connection
	I0929 10:59:59.316790       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:00:10.132886       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:01:24.796711       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:01:36.825220       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:02:37.436524       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:02:49.258740       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:03:57.316521       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:04:15.972963       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:05:20.197559       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:05:32.443093       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:06:30.356883       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:06:54.666147       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:07:45.483495       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:08:24.279465       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:08:44.277705       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0929 11:08:55.006304       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [bb724c19e10e4c19046a230ebe0a72b618e38771c025847de1d5976b54df50d3] <==
	I0929 10:58:32.608556       1 controllermanager.go:781] "Started controller" controller="endpointslice-mirroring-controller"
	I0929 10:58:32.608686       1 endpointslicemirroring_controller.go:227] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0929 10:58:32.608695       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint_slice_mirroring"
	I0929 10:58:32.654092       1 shared_informer.go:356] "Caches are synced" controller="token_cleaner"
	I0929 10:58:32.659753       1 controllermanager.go:781] "Started controller" controller="serviceaccount-controller"
	I0929 10:58:32.659800       1 serviceaccounts_controller.go:114] "Starting service account controller" logger="serviceaccount-controller"
	I0929 10:58:32.659815       1 shared_informer.go:349] "Waiting for caches to sync" controller="service account"
	I0929 10:58:32.706058       1 shared_informer.go:356] "Caches are synced" controller="tokens"
	I0929 10:58:32.708973       1 controllermanager.go:781] "Started controller" controller="replicaset-controller"
	I0929 10:58:32.708994       1 controllermanager.go:733] "Controller is disabled by a feature gate" controller="kube-apiserver-serving-clustertrustbundle-publisher-controller" requiredFeatureGates=["ClusterTrustBundle"]
	I0929 10:58:32.709113       1 replica_set.go:243] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0929 10:58:32.709126       1 shared_informer.go:349] "Waiting for caches to sync" controller="ReplicaSet"
	I0929 10:58:32.759444       1 controllermanager.go:781] "Started controller" controller="ephemeral-volume-controller"
	I0929 10:58:32.759467       1 controllermanager.go:733] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0929 10:58:32.759542       1 controller.go:173] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0929 10:58:32.759565       1 shared_informer.go:349] "Waiting for caches to sync" controller="ephemeral"
	I0929 10:58:32.860183       1 controllermanager.go:781] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0929 10:58:32.860242       1 shared_informer.go:349] "Waiting for caches to sync" controller="validatingadmissionpolicy-status"
	I0929 10:58:32.908517       1 controllermanager.go:781] "Started controller" controller="pod-garbage-collector-controller"
	I0929 10:58:32.908541       1 gc_controller.go:99] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0929 10:58:32.908560       1 shared_informer.go:349] "Waiting for caches to sync" controller="GC"
	I0929 10:58:32.959040       1 controllermanager.go:781] "Started controller" controller="job-controller"
	I0929 10:58:32.959118       1 job_controller.go:257] "Starting job controller" logger="job-controller"
	I0929 10:58:32.959125       1 shared_informer.go:349] "Waiting for caches to sync" controller="job"
	F0929 10:58:33.106900       1 client_builder_dynamic.go:154] Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/serviceaccounts/horizontal-pod-autoscaler": dial tcp 192.168.49.2:8441: connect: connection refused
	
	
	==> kube-controller-manager [e0814a8a9485f01d398cd520cfa0855a9f2b06109eda28c94b6c46b87088f116] <==
	I0929 10:58:47.684446       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0929 10:58:47.684499       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0929 10:58:47.684522       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0929 10:58:47.684559       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0929 10:58:47.684574       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0929 10:58:47.684586       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0929 10:58:47.684612       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0929 10:58:47.684575       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0929 10:58:47.684710       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0929 10:58:47.687047       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0929 10:58:47.688603       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 10:58:47.689651       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 10:58:47.689665       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0929 10:58:47.689673       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0929 10:58:47.692283       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0929 10:58:47.693734       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0929 10:58:47.699577       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0929 10:58:47.702807       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0929 10:58:47.704121       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0929 10:59:09.933571       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 10:59:09.937478       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 10:59:09.943310       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 10:59:09.943320       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 10:59:09.948834       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 10:59:09.953589       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [85f428d1b9a1e17a1b3319ddba5876115fe9370600401bad76f73093a8a71f54] <==
	I0929 10:58:23.238615       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 10:58:23.339571       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 10:58:23.339609       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0929 10:58:23.339751       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 10:58:23.359622       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 10:58:23.359685       1 server_linux.go:132] "Using iptables Proxier"
	I0929 10:58:23.365248       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 10:58:23.365729       1 server.go:527] "Version info" version="v1.34.0"
	I0929 10:58:23.365772       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 10:58:23.367554       1 config.go:106] "Starting endpoint slice config controller"
	I0929 10:58:23.367655       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 10:58:23.367681       1 config.go:200] "Starting service config controller"
	I0929 10:58:23.367688       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 10:58:23.367690       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 10:58:23.367701       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 10:58:23.368257       1 config.go:309] "Starting node config controller"
	I0929 10:58:23.368366       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 10:58:23.368409       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 10:58:23.468457       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 10:58:23.468543       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0929 10:58:23.468541       1 shared_informer.go:356] "Caches are synced" controller="service config"
	E0929 10:58:44.285021       1 reflector.go:205] "Failed to watch" err="endpointslices.discovery.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"endpointslices\" in API group \"discovery.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:node-proxier\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E0929 10:58:44.285022       1 reflector.go:205] "Failed to watch" err="servicecidrs.networking.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"servicecidrs\" in API group \"networking.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:node-proxier\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E0929 10:58:44.284991       1 reflector.go:205] "Failed to watch" err="nodes \"functional-992121\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:node-proxier\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 10:58:44.285084       1 reflector.go:205] "Failed to watch" err="services is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"services\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:node-proxier\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
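	
	The "Kube-proxy configuration may be incomplete or incorrect" message above is advisory: with nodePortAddresses unset, NodePort services accept connections on every local IP. A minimal sketch of the corresponding KubeProxyConfiguration fragment the warning alludes to (assuming it is applied through the kube-proxy ConfigMap or a kubeadm config; "primary" restricts NodePorts to the node's primary IPs, and explicit CIDRs are an alternative):
	
	  apiVersion: kubeproxy.config.k8s.io/v1alpha1
	  kind: KubeProxyConfiguration
	  nodePortAddresses:
	    - "primary"        # or an explicit CIDR such as "192.168.49.0/24"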
	
	
	==> kube-proxy [f701efdb742f90b1defe39cc43807f90d65df162a5566d134c75bacc24649aa9] <==
	I0929 10:57:12.330523       1 server_linux.go:53] "Using iptables proxy"
	I0929 10:57:12.406001       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 10:57:12.506305       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 10:57:12.506365       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0929 10:57:12.506467       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 10:57:12.529030       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 10:57:12.529086       1 server_linux.go:132] "Using iptables Proxier"
	I0929 10:57:12.534696       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 10:57:12.535161       1 server.go:527] "Version info" version="v1.34.0"
	I0929 10:57:12.535228       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 10:57:12.536713       1 config.go:106] "Starting endpoint slice config controller"
	I0929 10:57:12.536737       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 10:57:12.536783       1 config.go:200] "Starting service config controller"
	I0929 10:57:12.536790       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 10:57:12.537320       1 config.go:309] "Starting node config controller"
	I0929 10:57:12.537355       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 10:57:12.537364       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 10:57:12.537849       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 10:57:12.538329       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 10:57:12.636911       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0929 10:57:12.638201       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 10:57:12.638410       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [4dbd347d5cd9a49742d5d5050c9262c003be1d1edea4829469aa1e9f3b9651be] <==
	E0929 10:57:03.769769       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0929 10:57:03.769819       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0929 10:57:03.769877       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0929 10:57:03.769886       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0929 10:57:03.769968       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0929 10:57:03.769987       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0929 10:57:03.770088       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0929 10:57:03.770129       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 10:57:03.770134       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0929 10:57:04.659105       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 10:57:04.667276       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0929 10:57:04.693799       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0929 10:57:04.763128       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0929 10:57:04.768269       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0929 10:57:04.820445       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0929 10:57:04.929782       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0929 10:57:04.959944       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 10:57:05.154147       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I0929 10:57:07.667285       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 10:58:39.583859       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0929 10:58:39.583897       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 10:58:39.583945       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0929 10:58:39.583976       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0929 10:58:39.583981       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0929 10:58:39.583999       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [ab58a776e66912ce8a99a5bf3d7d3201ea1c517621773fcdf7eaa4e7cc0d2ec1] <==
	I0929 10:58:42.743413       1 serving.go:386] Generated self-signed cert in-memory
	W0929 10:58:44.267194       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0929 10:58:44.267230       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0929 10:58:44.267242       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0929 10:58:44.267252       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0929 10:58:44.292683       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0929 10:58:44.292707       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 10:58:44.294405       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 10:58:44.294440       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 10:58:44.294639       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0929 10:58:44.294695       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0929 10:58:44.395376       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 29 11:08:28 functional-992121 kubelet[5412]: E0929 11:08:28.051595    5412 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-zr7v6" podUID="e6dced3d-0c5e-4b68-84a3-59861a33bc24"
	Sep 29 11:08:32 functional-992121 kubelet[5412]: E0929 11:08:32.164486    5412 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759144112164299699  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303431}  inodes_used:{value:134}}"
	Sep 29 11:08:32 functional-992121 kubelet[5412]: E0929 11:08:32.164513    5412 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759144112164299699  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303431}  inodes_used:{value:134}}"
	Sep 29 11:08:38 functional-992121 kubelet[5412]: E0929 11:08:38.052042    5412 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-vk26j" podUID="83ba8425-589f-4e02-a880-6e0a1f7df1e7"
	Sep 29 11:08:40 functional-992121 kubelet[5412]: E0929 11:08:40.052070    5412 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-zr7v6" podUID="e6dced3d-0c5e-4b68-84a3-59861a33bc24"
	Sep 29 11:08:42 functional-992121 kubelet[5412]: E0929 11:08:42.165819    5412 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759144122165645882  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303431}  inodes_used:{value:134}}"
	Sep 29 11:08:42 functional-992121 kubelet[5412]: E0929 11:08:42.165859    5412 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759144122165645882  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303431}  inodes_used:{value:134}}"
	Sep 29 11:08:52 functional-992121 kubelet[5412]: E0929 11:08:52.053125    5412 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-zr7v6" podUID="e6dced3d-0c5e-4b68-84a3-59861a33bc24"
	Sep 29 11:08:52 functional-992121 kubelet[5412]: E0929 11:08:52.167374    5412 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759144132167176830  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303431}  inodes_used:{value:134}}"
	Sep 29 11:08:52 functional-992121 kubelet[5412]: E0929 11:08:52.167407    5412 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759144132167176830  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303431}  inodes_used:{value:134}}"
	Sep 29 11:08:53 functional-992121 kubelet[5412]: E0929 11:08:53.051332    5412 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-vk26j" podUID="83ba8425-589f-4e02-a880-6e0a1f7df1e7"
	Sep 29 11:09:02 functional-992121 kubelet[5412]: E0929 11:09:02.168917    5412 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759144142168603300  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303431}  inodes_used:{value:134}}"
	Sep 29 11:09:02 functional-992121 kubelet[5412]: E0929 11:09:02.168956    5412 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759144142168603300  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303431}  inodes_used:{value:134}}"
	Sep 29 11:09:05 functional-992121 kubelet[5412]: E0929 11:09:05.051714    5412 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-vk26j" podUID="83ba8425-589f-4e02-a880-6e0a1f7df1e7"
	Sep 29 11:09:07 functional-992121 kubelet[5412]: E0929 11:09:07.052249    5412 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-zr7v6" podUID="e6dced3d-0c5e-4b68-84a3-59861a33bc24"
	Sep 29 11:09:12 functional-992121 kubelet[5412]: E0929 11:09:12.170676    5412 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759144152170486846  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303431}  inodes_used:{value:134}}"
	Sep 29 11:09:12 functional-992121 kubelet[5412]: E0929 11:09:12.170705    5412 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759144152170486846  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303431}  inodes_used:{value:134}}"
	Sep 29 11:09:16 functional-992121 kubelet[5412]: E0929 11:09:16.051896    5412 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-vk26j" podUID="83ba8425-589f-4e02-a880-6e0a1f7df1e7"
	Sep 29 11:09:20 functional-992121 kubelet[5412]: E0929 11:09:20.052126    5412 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-zr7v6" podUID="e6dced3d-0c5e-4b68-84a3-59861a33bc24"
	Sep 29 11:09:22 functional-992121 kubelet[5412]: E0929 11:09:22.172256    5412 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759144162172056670  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303431}  inodes_used:{value:134}}"
	Sep 29 11:09:22 functional-992121 kubelet[5412]: E0929 11:09:22.172285    5412 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759144162172056670  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303431}  inodes_used:{value:134}}"
	Sep 29 11:09:31 functional-992121 kubelet[5412]: E0929 11:09:31.051292    5412 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-vk26j" podUID="83ba8425-589f-4e02-a880-6e0a1f7df1e7"
	Sep 29 11:09:32 functional-992121 kubelet[5412]: E0929 11:09:32.173619    5412 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759144172173419766  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303431}  inodes_used:{value:134}}"
	Sep 29 11:09:32 functional-992121 kubelet[5412]: E0929 11:09:32.173651    5412 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759144172173419766  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303431}  inodes_used:{value:134}}"
	Sep 29 11:09:34 functional-992121 kubelet[5412]: E0929 11:09:34.051304    5412 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-zr7v6" podUID="e6dced3d-0c5e-4b68-84a3-59861a33bc24"
	
	
	==> kubernetes-dashboard [719114b2cacd861e811356f0f50eff96e58baa39ab87a766e5146bedb845e469] <==
	2025/09/29 10:59:21 Using namespace: kubernetes-dashboard
	2025/09/29 10:59:21 Using in-cluster config to connect to apiserver
	2025/09/29 10:59:21 Using secret token for csrf signing
	2025/09/29 10:59:21 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/09/29 10:59:21 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/09/29 10:59:21 Successful initial request to the apiserver, version: v1.34.0
	2025/09/29 10:59:21 Generating JWE encryption key
	2025/09/29 10:59:21 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/09/29 10:59:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/09/29 10:59:21 Initializing JWE encryption key from synchronized object
	2025/09/29 10:59:21 Creating in-cluster Sidecar client
	2025/09/29 10:59:21 Successful request to sidecar
	2025/09/29 10:59:21 Serving insecurely on HTTP port: 9090
	2025/09/29 10:59:21 Starting overwatch
	
	
	==> storage-provisioner [0795ed6bb4cfdca0d7b88630013737442cd1a1f4fcdc51dd928c0ed367087ffd] <==
	W0929 10:57:57.220465       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:57:59.223194       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:57:59.228385       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:58:01.231429       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:58:01.235420       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:58:03.238810       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:58:03.246503       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:58:05.249391       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:58:05.253334       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:58:07.256188       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:58:07.260332       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:58:09.263428       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:58:09.267368       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:58:11.271119       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:58:11.275425       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:58:13.278439       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:58:13.282581       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:58:15.285258       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:58:15.290819       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:58:17.294519       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:58:17.298090       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:58:19.301514       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:58:19.306132       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:58:21.309873       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:58:21.313664       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [5971ed2af0b6d2376a877c18b10ef8a323cb99587d37a21d75e71ac2b9beac63] <==
	W0929 11:09:10.389627       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:09:12.392501       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:09:12.397813       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:09:14.401320       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:09:14.405341       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:09:16.408760       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:09:16.412566       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:09:18.414957       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:09:18.419712       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:09:20.422321       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:09:20.426126       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:09:22.428981       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:09:22.432678       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:09:24.435898       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:09:24.440504       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:09:26.444288       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:09:26.448003       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:09:28.450941       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:09:28.454638       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:09:30.456981       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:09:30.460759       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:09:32.463765       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:09:32.468760       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:09:34.472835       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:09:34.477347       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-992121 -n functional-992121
helpers_test.go:269: (dbg) Run:  kubectl --context functional-992121 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-vk26j hello-node-connect-7d85dfc575-zr7v6
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-992121 describe pod busybox-mount hello-node-75c85bcc94-vk26j hello-node-connect-7d85dfc575-zr7v6
helpers_test.go:290: (dbg) kubectl --context functional-992121 describe pod busybox-mount hello-node-75c85bcc94-vk26j hello-node-connect-7d85dfc575-zr7v6:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-992121/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 10:59:08 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  mount-munger:
	    Container ID:  cri-o://2e18fd950876d82ed8c0a9ebdf0f6d79708589d8055e5986b0ab11da8a97ef6f
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 29 Sep 2025 10:59:12 +0000
	      Finished:     Mon, 29 Sep 2025 10:59:12 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hpzb5 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-hpzb5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10m   default-scheduler  Successfully assigned default/busybox-mount to functional-992121
	  Normal  Pulling    10m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     10m   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 3.029s (3.029s including waiting). Image size: 4631262 bytes.
	  Normal  Created    10m   kubelet            Created container: mount-munger
	  Normal  Started    10m   kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-vk26j
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-992121/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 10:59:06 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-f5l5n (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-f5l5n:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-vk26j to functional-992121
	  Normal   Pulling    7m23s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m23s (x5 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     7m23s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    20s (x42 over 10m)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     20s (x42 over 10m)   kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-zr7v6
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-992121/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 10:59:33 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:           10.244.0.10
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pkxxq (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-pkxxq:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  10m                 default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-zr7v6 to functional-992121
	  Normal   Pulling    7m6s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m6s (x5 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     7m6s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    2s (x43 over 10m)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     2s (x43 over 10m)   kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.04s)
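
Note on the root cause: the post-mortem above shows every pull of "kicbase/echo-server" failing with a short-name resolution error, because CRI-O refuses unqualified image names when /etc/containers/registries.conf inside the node defines no unqualified-search registries. A minimal sketch of one possible workaround is below; the docker.io search registry and the CRI-O restart are assumptions about this environment, not steps taken in this run, and the append only applies if the key is absent from the file.

	# Hypothetical workaround, run inside the node (minikube -p functional-992121 ssh):
	# allow short-name lookups against Docker Hub, then restart CRI-O so kubelet pulls can resolve "kicbase/echo-server".
	echo 'unqualified-search-registries = ["docker.io"]' | sudo tee -a /etc/containers/registries.conf
	sudo systemctl restart crio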

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (600.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-992121 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-992121 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-vk26j" [83ba8425-589f-4e02-a880-6e0a1f7df1e7] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-992121 -n functional-992121
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-09-29 11:09:06.982580282 +0000 UTC m=+1186.963637853
functional_test.go:1460: (dbg) Run:  kubectl --context functional-992121 describe po hello-node-75c85bcc94-vk26j -n default
functional_test.go:1460: (dbg) kubectl --context functional-992121 describe po hello-node-75c85bcc94-vk26j -n default:
Name:             hello-node-75c85bcc94-vk26j
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-992121/192.168.49.2
Start Time:       Mon, 29 Sep 2025 10:59:06 +0000
Labels:           app=hello-node
                  pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.4
IPs:
  IP:           10.244.0.4
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-f5l5n (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-f5l5n:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-vk26j to functional-992121
  Normal   Pulling    6m54s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     6m54s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
  Warning  Failed     6m54s (x5 over 10m)   kubelet            Error: ErrImagePull
  Warning  Failed     4m56s (x20 over 10m)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m45s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-992121 logs hello-node-75c85bcc94-vk26j -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-992121 logs hello-node-75c85bcc94-vk26j -n default: exit status 1 (67.365288ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-vk26j" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-992121 logs hello-node-75c85bcc94-vk26j -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.66s)
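
The deployment created at functional_test.go:1451 references the image by its short name, which CRI-O cannot resolve here. A fully qualified reference sidesteps short-name resolution entirely; the sketch below assumes the image is published on Docker Hub under kicbase/echo-server, and the 1.0 tag is an assumption rather than a value taken from this run.

	# Hypothetical variant of the failing steps, using a fully qualified image reference:
	kubectl --context functional-992121 create deployment hello-node --image=docker.io/kicbase/echo-server:1.0
	kubectl --context functional-992121 expose deployment hello-node --type=NodePort --port=8080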

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-992121 service --namespace=default --https --url hello-node: exit status 115 (518.623439ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:31820
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-992121 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-992121 service hello-node --url --format={{.IP}}: exit status 115 (530.944742ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-992121 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.53s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-992121 service hello-node --url: exit status 115 (519.773374ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:31820
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-992121 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31820
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.52s)
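
The three ServiceCmd failures above (HTTPS, Format, URL) are secondary: minikube resolves the NodePort URL but exits with SVC_UNREACHABLE because the hello-node pod never left ImagePullBackOff, so the service has no ready endpoints. A quick way to confirm that from the same context (standard kubectl commands, not part of this run):

	kubectl --context functional-992121 get endpoints hello-node
	kubectl --context functional-992121 get pods -l app=hello-node -o wide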

                                                
                                    

Test pass (299/332)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 13.38
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.06
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.34.0/json-events 12.53
13 TestDownloadOnly/v1.34.0/preload-exists 0
17 TestDownloadOnly/v1.34.0/LogsDuration 0.06
18 TestDownloadOnly/v1.34.0/DeleteAll 0.22
19 TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds 0.13
20 TestDownloadOnlyKic 1.15
21 TestBinaryMirror 0.82
22 TestOffline 90.31
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 154.65
31 TestAddons/serial/GCPAuth/Namespaces 0.13
32 TestAddons/serial/GCPAuth/FakeCredentials 9.49
35 TestAddons/parallel/Registry 15.63
36 TestAddons/parallel/RegistryCreds 0.66
38 TestAddons/parallel/InspektorGadget 6.27
39 TestAddons/parallel/MetricsServer 5.66
41 TestAddons/parallel/CSI 56.08
42 TestAddons/parallel/Headlamp 18.47
43 TestAddons/parallel/CloudSpanner 5.54
44 TestAddons/parallel/LocalPath 55.6
45 TestAddons/parallel/NvidiaDevicePlugin 5.61
46 TestAddons/parallel/Yakd 11.74
47 TestAddons/parallel/AmdGpuDevicePlugin 5.62
48 TestAddons/StoppedEnableDisable 18.44
49 TestCertOptions 28.07
50 TestCertExpiration 213.57
52 TestForceSystemdFlag 31.38
53 TestForceSystemdEnv 39.46
55 TestKVMDriverInstallOrUpdate 1.04
59 TestErrorSpam/setup 19.95
60 TestErrorSpam/start 0.67
61 TestErrorSpam/status 0.94
62 TestErrorSpam/pause 1.48
63 TestErrorSpam/unpause 1.5
64 TestErrorSpam/stop 12.53
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 68.12
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 6.71
71 TestFunctional/serial/KubeContext 0.05
72 TestFunctional/serial/KubectlGetPods 0.08
75 TestFunctional/serial/CacheCmd/cache/add_remote 4.48
76 TestFunctional/serial/CacheCmd/cache/add_local 2.49
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
80 TestFunctional/serial/CacheCmd/cache/cache_reload 2.22
81 TestFunctional/serial/CacheCmd/cache/delete 0.1
82 TestFunctional/serial/MinikubeKubectlCmd 0.1
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
84 TestFunctional/serial/ExtraConfig 46.92
85 TestFunctional/serial/ComponentHealth 0.07
86 TestFunctional/serial/LogsCmd 1.4
87 TestFunctional/serial/LogsFileCmd 1.44
88 TestFunctional/serial/InvalidService 3.96
90 TestFunctional/parallel/ConfigCmd 0.4
91 TestFunctional/parallel/DashboardCmd 15.89
92 TestFunctional/parallel/DryRun 0.47
93 TestFunctional/parallel/InternationalLanguage 0.19
94 TestFunctional/parallel/StatusCmd 1.1
99 TestFunctional/parallel/AddonsCmd 0.13
100 TestFunctional/parallel/PersistentVolumeClaim 31.36
102 TestFunctional/parallel/SSHCmd 0.56
103 TestFunctional/parallel/CpCmd 1.85
104 TestFunctional/parallel/MySQL 17.06
105 TestFunctional/parallel/FileSync 0.26
106 TestFunctional/parallel/CertSync 1.68
110 TestFunctional/parallel/NodeLabels 0.06
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.56
114 TestFunctional/parallel/License 0.89
116 TestFunctional/parallel/ProfileCmd/profile_not_create 0.54
117 TestFunctional/parallel/MountCmd/any-port 9.15
118 TestFunctional/parallel/ProfileCmd/profile_list 0.43
119 TestFunctional/parallel/ProfileCmd/profile_json_output 0.42
120 TestFunctional/parallel/Version/short 0.05
121 TestFunctional/parallel/Version/components 0.48
122 TestFunctional/parallel/MountCmd/specific-port 1.87
123 TestFunctional/parallel/MountCmd/VerifyCleanup 1.6
125 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.51
126 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
128 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 17.23
129 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
130 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
131 TestFunctional/parallel/ImageCommands/ImageListJson 0.31
132 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
133 TestFunctional/parallel/ImageCommands/ImageBuild 6.81
134 TestFunctional/parallel/ImageCommands/Setup 1.78
135 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.23
136 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.88
137 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.21
138 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.5
139 TestFunctional/parallel/ImageCommands/ImageRemove 0.52
140 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.75
141 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.56
142 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
143 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
147 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
148 TestFunctional/parallel/UpdateContextCmd/no_changes 0.14
149 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.13
150 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.13
151 TestFunctional/parallel/ServiceCmd/List 1.69
152 TestFunctional/parallel/ServiceCmd/JSONOutput 1.68
156 TestFunctional/delete_echo-server_images 0.04
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
163 TestMultiControlPlane/serial/StartCluster 165.75
164 TestMultiControlPlane/serial/DeployApp 7.83
165 TestMultiControlPlane/serial/PingHostFromPods 1.1
166 TestMultiControlPlane/serial/AddWorkerNode 54.75
167 TestMultiControlPlane/serial/NodeLabels 0.07
168 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.9
169 TestMultiControlPlane/serial/CopyFile 16.51
170 TestMultiControlPlane/serial/StopSecondaryNode 14.18
171 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.71
172 TestMultiControlPlane/serial/RestartSecondaryNode 9.13
173 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.89
174 TestMultiControlPlane/serial/RestartClusterKeepsNodes 114.28
175 TestMultiControlPlane/serial/DeleteSecondaryNode 11.37
176 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.7
177 TestMultiControlPlane/serial/StopCluster 43.56
178 TestMultiControlPlane/serial/RestartCluster 56.66
179 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.71
180 TestMultiControlPlane/serial/AddSecondaryNode 41.77
181 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.91
185 TestJSONOutput/start/Command 69.45
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Command 0.65
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Command 0.62
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 7.99
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.21
210 TestKicCustomNetwork/create_custom_network 35.06
211 TestKicCustomNetwork/use_default_bridge_network 24.74
212 TestKicExistingNetwork 22.33
213 TestKicCustomSubnet 24.75
214 TestKicStaticIP 26.22
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 46.79
219 TestMountStart/serial/StartWithMountFirst 6.24
220 TestMountStart/serial/VerifyMountFirst 0.26
221 TestMountStart/serial/StartWithMountSecond 6.55
222 TestMountStart/serial/VerifyMountSecond 0.25
223 TestMountStart/serial/DeleteFirst 1.66
224 TestMountStart/serial/VerifyMountPostDelete 0.25
225 TestMountStart/serial/Stop 1.19
226 TestMountStart/serial/RestartStopped 7.95
227 TestMountStart/serial/VerifyMountPostStop 0.26
230 TestMultiNode/serial/FreshStart2Nodes 93.82
231 TestMultiNode/serial/DeployApp2Nodes 5.53
232 TestMultiNode/serial/PingHostFrom2Pods 0.78
233 TestMultiNode/serial/AddNode 23.88
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.67
236 TestMultiNode/serial/CopyFile 9.67
237 TestMultiNode/serial/StopNode 2.3
238 TestMultiNode/serial/StartAfterStop 7.25
239 TestMultiNode/serial/RestartKeepsNodes 83.67
240 TestMultiNode/serial/DeleteNode 5.27
241 TestMultiNode/serial/StopMultiNode 28.65
242 TestMultiNode/serial/RestartMultiNode 48.51
243 TestMultiNode/serial/ValidateNameConflict 24.14
248 TestPreload 123.67
250 TestScheduledStopUnix 98.49
253 TestInsufficientStorage 9.8
254 TestRunningBinaryUpgrade 50.11
256 TestKubernetesUpgrade 299.03
257 TestMissingContainerUpgrade 106.02
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
260 TestNoKubernetes/serial/StartWithK8s 37.76
261 TestNoKubernetes/serial/StartWithStopK8s 24.18
262 TestNoKubernetes/serial/Start 6.06
263 TestNoKubernetes/serial/VerifyK8sNotRunning 0.27
264 TestNoKubernetes/serial/ProfileList 1.47
265 TestNoKubernetes/serial/Stop 1.22
266 TestNoKubernetes/serial/StartNoArgs 8.67
267 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.27
268 TestStoppedBinaryUpgrade/Setup 2.61
269 TestStoppedBinaryUpgrade/Upgrade 46.15
278 TestPause/serial/Start 44.69
279 TestStoppedBinaryUpgrade/MinikubeLogs 1.06
287 TestNetworkPlugins/group/false 3.41
291 TestPause/serial/SecondStartNoReconfiguration 7
292 TestPause/serial/Pause 0.71
293 TestPause/serial/VerifyStatus 0.37
294 TestPause/serial/Unpause 0.74
295 TestPause/serial/PauseAgain 0.84
296 TestPause/serial/DeletePaused 2.91
297 TestPause/serial/VerifyDeletedResources 0.81
299 TestStartStop/group/old-k8s-version/serial/FirstStart 51.28
301 TestStartStop/group/no-preload/serial/FirstStart 54.03
302 TestStartStop/group/old-k8s-version/serial/DeployApp 11.29
303 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.92
304 TestStartStop/group/old-k8s-version/serial/Stop 16.16
305 TestStartStop/group/no-preload/serial/DeployApp 9.26
306 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.98
307 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
308 TestStartStop/group/old-k8s-version/serial/SecondStart 52.43
309 TestStartStop/group/no-preload/serial/Stop 16.47
310 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
311 TestStartStop/group/no-preload/serial/SecondStart 45.42
312 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
313 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
314 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
315 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
316 TestStartStop/group/old-k8s-version/serial/Pause 2.9
317 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
319 TestStartStop/group/embed-certs/serial/FirstStart 72.07
320 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.42
321 TestStartStop/group/no-preload/serial/Pause 3.63
323 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 42.19
325 TestStartStop/group/newest-cni/serial/FirstStart 30.66
326 TestNetworkPlugins/group/auto/Start 40.9
327 TestStartStop/group/newest-cni/serial/DeployApp 0
328 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.8
329 TestStartStop/group/newest-cni/serial/Stop 7.95
330 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
331 TestStartStop/group/newest-cni/serial/SecondStart 11.13
332 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.27
333 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.94
334 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
335 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
336 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.28
337 TestStartStop/group/default-k8s-diff-port/serial/Stop 18.22
338 TestStartStop/group/newest-cni/serial/Pause 2.66
339 TestNetworkPlugins/group/kindnet/Start 43.09
340 TestNetworkPlugins/group/auto/KubeletFlags 0.3
341 TestNetworkPlugins/group/auto/NetCatPod 11.22
342 TestStartStop/group/embed-certs/serial/DeployApp 10.34
343 TestNetworkPlugins/group/auto/DNS 0.17
344 TestNetworkPlugins/group/auto/Localhost 0.13
345 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.22
346 TestNetworkPlugins/group/auto/HairPin 0.13
347 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 52.36
348 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.95
349 TestStartStop/group/embed-certs/serial/Stop 16.39
350 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
351 TestStartStop/group/embed-certs/serial/SecondStart 52.73
352 TestNetworkPlugins/group/calico/Start 50.53
353 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
354 TestNetworkPlugins/group/kindnet/KubeletFlags 0.3
355 TestNetworkPlugins/group/kindnet/NetCatPod 10.22
356 TestNetworkPlugins/group/kindnet/DNS 0.14
357 TestNetworkPlugins/group/kindnet/Localhost 0.11
358 TestNetworkPlugins/group/kindnet/HairPin 0.12
359 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
360 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
361 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.27
362 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.21
363 TestNetworkPlugins/group/custom-flannel/Start 56.55
364 TestNetworkPlugins/group/enable-default-cni/Start 73.6
365 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
366 TestNetworkPlugins/group/calico/ControllerPod 6.01
367 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
368 TestNetworkPlugins/group/calico/KubeletFlags 0.29
369 TestNetworkPlugins/group/calico/NetCatPod 9.22
370 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
371 TestStartStop/group/embed-certs/serial/Pause 2.94
372 TestNetworkPlugins/group/calico/DNS 0.2
373 TestNetworkPlugins/group/calico/Localhost 0.16
374 TestNetworkPlugins/group/calico/HairPin 0.16
375 TestNetworkPlugins/group/flannel/Start 56.11
376 TestNetworkPlugins/group/bridge/Start 68.24
377 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.29
378 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.21
379 TestNetworkPlugins/group/custom-flannel/DNS 0.14
380 TestNetworkPlugins/group/custom-flannel/Localhost 0.11
381 TestNetworkPlugins/group/custom-flannel/HairPin 0.12
382 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.31
383 TestNetworkPlugins/group/enable-default-cni/NetCatPod 8.2
384 TestNetworkPlugins/group/flannel/ControllerPod 6.01
385 TestNetworkPlugins/group/flannel/KubeletFlags 0.3
386 TestNetworkPlugins/group/flannel/NetCatPod 9.2
387 TestNetworkPlugins/group/enable-default-cni/DNS 0.23
388 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
389 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
390 TestNetworkPlugins/group/flannel/DNS 0.14
391 TestNetworkPlugins/group/flannel/Localhost 0.12
392 TestNetworkPlugins/group/flannel/HairPin 0.12
393 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
394 TestNetworkPlugins/group/bridge/NetCatPod 9.19
395 TestNetworkPlugins/group/bridge/DNS 0.13
396 TestNetworkPlugins/group/bridge/Localhost 0.11
397 TestNetworkPlugins/group/bridge/HairPin 0.12
TestDownloadOnly/v1.28.0/json-events (13.38s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-234913 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-234913 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (13.377052603s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (13.38s)
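Because the start command above is run with -o=json, minikube reports its progress as a stream of JSON events (hence the subtest name json-events). A minimal way to inspect that stream by hand, assuming jq is available and using an illustrative profile name rather than the one from this run:

	out/minikube-linux-amd64 start -o=json --download-only -p download-only-demo \
	  --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker | jq .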

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I0929 10:49:33.434769  132495 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
I0929 10:49:33.434895  132495 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21656-128977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-234913
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-234913: exit status 85 (63.525985ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-234913 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-234913 │ jenkins │ v1.37.0 │ 29 Sep 25 10:49 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 10:49:20
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 10:49:20.100243  132507 out.go:360] Setting OutFile to fd 1 ...
	I0929 10:49:20.100541  132507 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:49:20.100552  132507 out.go:374] Setting ErrFile to fd 2...
	I0929 10:49:20.100556  132507 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:49:20.100784  132507 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21656-128977/.minikube/bin
	W0929 10:49:20.100933  132507 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21656-128977/.minikube/config/config.json: open /home/jenkins/minikube-integration/21656-128977/.minikube/config/config.json: no such file or directory
	I0929 10:49:20.101433  132507 out.go:368] Setting JSON to true
	I0929 10:49:20.102409  132507 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":1898,"bootTime":1759141062,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 10:49:20.102520  132507 start.go:140] virtualization: kvm guest
	I0929 10:49:20.104946  132507 out.go:99] [download-only-234913] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W0929 10:49:20.105102  132507 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/21656-128977/.minikube/cache/preloaded-tarball: no such file or directory
	I0929 10:49:20.105148  132507 notify.go:220] Checking for updates...
	I0929 10:49:20.106400  132507 out.go:171] MINIKUBE_LOCATION=21656
	I0929 10:49:20.107730  132507 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 10:49:20.109027  132507 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21656-128977/kubeconfig
	I0929 10:49:20.110326  132507 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21656-128977/.minikube
	I0929 10:49:20.111529  132507 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W0929 10:49:20.113562  132507 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0929 10:49:20.113800  132507 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 10:49:20.136842  132507 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 10:49:20.136956  132507 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 10:49:20.193879  132507 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-09-29 10:49:20.183605107 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 10:49:20.194000  132507 docker.go:318] overlay module found
	I0929 10:49:20.195624  132507 out.go:99] Using the docker driver based on user configuration
	I0929 10:49:20.195655  132507 start.go:304] selected driver: docker
	I0929 10:49:20.195661  132507 start.go:924] validating driver "docker" against <nil>
	I0929 10:49:20.195765  132507 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 10:49:20.251330  132507 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-09-29 10:49:20.242024124 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 10:49:20.251499  132507 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0929 10:49:20.251992  132507 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I0929 10:49:20.252130  132507 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0929 10:49:20.253974  132507 out.go:171] Using Docker driver with root privileges
	I0929 10:49:20.255190  132507 cni.go:84] Creating CNI manager for ""
	I0929 10:49:20.255232  132507 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 10:49:20.255252  132507 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0929 10:49:20.255333  132507 start.go:348] cluster config:
	{Name:download-only-234913 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-234913 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 10:49:20.256567  132507 out.go:99] Starting "download-only-234913" primary control-plane node in "download-only-234913" cluster
	I0929 10:49:20.256586  132507 cache.go:123] Beginning downloading kic base image for docker with crio
	I0929 10:49:20.257732  132507 out.go:99] Pulling base image v0.0.48 ...
	I0929 10:49:20.257760  132507 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0929 10:49:20.257876  132507 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 10:49:20.274878  132507 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I0929 10:49:20.275168  132507 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory
	I0929 10:49:20.275296  132507 image.go:150] Writing gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I0929 10:49:20.724049  132507 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I0929 10:49:20.724105  132507 cache.go:58] Caching tarball of preloaded images
	I0929 10:49:20.724296  132507 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0929 10:49:20.726286  132507 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I0929 10:49:20.726318  132507 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 ...
	I0929 10:49:20.821881  132507 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21656-128977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I0929 10:49:24.780019  132507 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 as a tarball
	
	
	* The control-plane node download-only-234913 host does not exist
	  To start a cluster, run: "minikube start -p download-only-234913"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.06s)
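As the captured stdout shows, this profile was created with --download-only, so no host was ever started; minikube logs therefore exits with status 85, which is why the test records the error but still passes. A sketch of repeating that check by hand before the profile is deleted (profile name taken from this run):

	out/minikube-linux-amd64 logs -p download-only-234913
	echo $?   # 85 in this run: the control-plane node download-only-234913 host does not exist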

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-234913
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.34.0/json-events (12.53s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-349639 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-349639 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (12.533436107s)
--- PASS: TestDownloadOnly/v1.34.0/json-events (12.53s)

                                                
                                    
TestDownloadOnly/v1.34.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/preload-exists
I0929 10:49:46.383237  132495 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
I0929 10:49:46.383291  132495 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21656-128977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-349639
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-349639: exit status 85 (63.233719ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-234913 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-234913 │ jenkins │ v1.37.0 │ 29 Sep 25 10:49 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 29 Sep 25 10:49 UTC │ 29 Sep 25 10:49 UTC │
	│ delete  │ -p download-only-234913                                                                                                                                                   │ download-only-234913 │ jenkins │ v1.37.0 │ 29 Sep 25 10:49 UTC │ 29 Sep 25 10:49 UTC │
	│ start   │ -o=json --download-only -p download-only-349639 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-349639 │ jenkins │ v1.37.0 │ 29 Sep 25 10:49 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 10:49:33
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 10:49:33.890995  132866 out.go:360] Setting OutFile to fd 1 ...
	I0929 10:49:33.891243  132866 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:49:33.891253  132866 out.go:374] Setting ErrFile to fd 2...
	I0929 10:49:33.891258  132866 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:49:33.891458  132866 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21656-128977/.minikube/bin
	I0929 10:49:33.891964  132866 out.go:368] Setting JSON to true
	I0929 10:49:33.892819  132866 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":1912,"bootTime":1759141062,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 10:49:33.892916  132866 start.go:140] virtualization: kvm guest
	I0929 10:49:33.894659  132866 out.go:99] [download-only-349639] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 10:49:33.894835  132866 notify.go:220] Checking for updates...
	I0929 10:49:33.895962  132866 out.go:171] MINIKUBE_LOCATION=21656
	I0929 10:49:33.897320  132866 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 10:49:33.898482  132866 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21656-128977/kubeconfig
	I0929 10:49:33.899649  132866 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21656-128977/.minikube
	I0929 10:49:33.901010  132866 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W0929 10:49:33.903241  132866 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0929 10:49:33.903505  132866 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 10:49:33.927687  132866 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 10:49:33.927773  132866 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 10:49:33.986086  132866 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-09-29 10:49:33.97427921 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 10:49:33.986201  132866 docker.go:318] overlay module found
	I0929 10:49:33.987785  132866 out.go:99] Using the docker driver based on user configuration
	I0929 10:49:33.987815  132866 start.go:304] selected driver: docker
	I0929 10:49:33.987835  132866 start.go:924] validating driver "docker" against <nil>
	I0929 10:49:33.987931  132866 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 10:49:34.042046  132866 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-09-29 10:49:34.03241949 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 10:49:34.042271  132866 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0929 10:49:34.042747  132866 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I0929 10:49:34.042952  132866 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0929 10:49:34.044732  132866 out.go:171] Using Docker driver with root privileges
	I0929 10:49:34.045967  132866 cni.go:84] Creating CNI manager for ""
	I0929 10:49:34.046044  132866 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 10:49:34.046063  132866 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0929 10:49:34.046154  132866 start.go:348] cluster config:
	{Name:download-only-349639 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:download-only-349639 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 10:49:34.048982  132866 out.go:99] Starting "download-only-349639" primary control-plane node in "download-only-349639" cluster
	I0929 10:49:34.049006  132866 cache.go:123] Beginning downloading kic base image for docker with crio
	I0929 10:49:34.050074  132866 out.go:99] Pulling base image v0.0.48 ...
	I0929 10:49:34.050101  132866 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 10:49:34.050124  132866 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 10:49:34.066699  132866 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I0929 10:49:34.066820  132866 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory
	I0929 10:49:34.066861  132866 image.go:68] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory, skipping pull
	I0929 10:49:34.066873  132866 image.go:137] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in cache, skipping pull
	I0929 10:49:34.066887  132866 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 as a tarball
	I0929 10:49:34.865766  132866 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.0/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0929 10:49:34.865839  132866 cache.go:58] Caching tarball of preloaded images
	I0929 10:49:34.866042  132866 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 10:49:34.870120  132866 out.go:99] Downloading Kubernetes v1.34.0 preload ...
	I0929 10:49:34.870156  132866 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 ...
	I0929 10:49:34.968851  132866 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.0/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2ff28357f4fb6607eaee8f503f8804cd -> /home/jenkins/minikube-integration/21656-128977/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-349639 host does not exist
	  To start a cluster, run: "minikube start -p download-only-349639"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.34.0/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.0/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-349639
--- PASS: TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnlyKic (1.15s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-977032 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-977032" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-977032
--- PASS: TestDownloadOnlyKic (1.15s)

                                                
                                    
TestBinaryMirror (0.82s)

                                                
                                                
=== RUN   TestBinaryMirror
I0929 10:49:48.222529  132495 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-048358 --alsologtostderr --binary-mirror http://127.0.0.1:35065 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-048358" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-048358
--- PASS: TestBinaryMirror (0.82s)

                                                
                                    
TestOffline (90.31s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-632695 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-632695 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m26.801900084s)
helpers_test.go:175: Cleaning up "offline-crio-632695" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-632695
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-632695: (3.503044473s)
--- PASS: TestOffline (90.31s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-721094
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-721094: exit status 85 (55.002163ms)

                                                
                                                
-- stdout --
	* Profile "addons-721094" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-721094"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-721094
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-721094: exit status 85 (54.507394ms)

                                                
                                                
-- stdout --
	* Profile "addons-721094" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-721094"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/Setup (154.65s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-721094 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-721094 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m34.648400109s)
--- PASS: TestAddons/Setup (154.65s)
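All of these addons are enabled in a single start invocation; the same toggles also work one at a time against the running profile, as the later subtests do when they disable each addon. A minimal example, mirroring the enable/disable commands used elsewhere in this report (the addon name here is chosen for illustration):

	out/minikube-linux-amd64 -p addons-721094 addons enable metrics-server
	out/minikube-linux-amd64 -p addons-721094 addons disable metrics-server --alsologtostderr -v=1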

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-721094 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-721094 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (9.49s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-721094 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-721094 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [599fb817-41df-47eb-9dc2-7c62648d5ced] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [599fb817-41df-47eb-9dc2-7c62648d5ced] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.004185359s
addons_test.go:694: (dbg) Run:  kubectl --context addons-721094 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-721094 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-721094 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.49s)

                                                
                                    
TestAddons/parallel/Registry (15.63s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 3.463305ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-rlbb5" [583b7d01-6d10-4a27-bc85-640cbbe0d7a8] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002558861s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-9pbvq" [123da109-669b-43b6-8733-9d3cc0ef882d] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003083526s
addons_test.go:392: (dbg) Run:  kubectl --context addons-721094 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-721094 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-721094 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.830742715s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-721094 ip
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-721094 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.63s)

                                                
                                    
TestAddons/parallel/RegistryCreds (0.66s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.009341ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-721094
addons_test.go:332: (dbg) Run:  kubectl --context addons-721094 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-721094 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.66s)

                                                
                                    
TestAddons/parallel/InspektorGadget (6.27s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-jmscx" [a73d9e62-3d97-41a2-b9a8-ee3178024104] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.002841264s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-721094 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (6.27s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.66s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
I0929 10:52:47.589384  132495 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0929 10:52:47.589414  132495 kapi.go:107] duration metric: took 6.287255ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:455: metrics-server stabilized in 7.618214ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-9lqfj" [dd50327c-15f5-41a0-9172-bbbb6eae1d89] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003966414s
addons_test.go:463: (dbg) Run:  kubectl --context addons-721094 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-721094 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.66s)

                                                
                                    
TestAddons/parallel/CSI (56.08s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0929 10:52:47.583153  132495 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 6.300099ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-721094 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-721094 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-721094 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-721094 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-721094 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-721094 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-721094 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-721094 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-721094 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-721094 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-721094 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-721094 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [5ab15851-17a6-4bc7-b1d5-e7edc2e76085] Pending
2025/09/29 10:52:57 [DEBUG] GET http://192.168.49.2:5000
helpers_test.go:352: "task-pv-pod" [5ab15851-17a6-4bc7-b1d5-e7edc2e76085] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [5ab15851-17a6-4bc7-b1d5-e7edc2e76085] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.002856082s
addons_test.go:572: (dbg) Run:  kubectl --context addons-721094 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-721094 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-721094 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-721094 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-721094 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-721094 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-721094 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-721094 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-721094 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-721094 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-721094 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-721094 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-721094 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-721094 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-721094 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-721094 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-721094 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-721094 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-721094 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-721094 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-721094 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-721094 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-721094 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-721094 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-721094 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [8b66f8cc-426a-4473-9926-ef5a72b444cb] Pending
helpers_test.go:352: "task-pv-pod-restore" [8b66f8cc-426a-4473-9926-ef5a72b444cb] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [8b66f8cc-426a-4473-9926-ef5a72b444cb] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003499906s
addons_test.go:614: (dbg) Run:  kubectl --context addons-721094 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-721094 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-721094 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-721094 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-721094 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-721094 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.550322091s)
--- PASS: TestAddons/parallel/CSI (56.08s)
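For orientation, the snapshot/restore sequence exercised above (hpvc -> new-snapshot-demo -> hpvc-restore) can be reproduced by hand against the csi-hostpath-driver addon. The sketch below is not the contents of the testdata manifests; the class names csi-hostpath-sc and csi-hostpath-snapclass and the 1Gi size are assumptions based on the addon's usual defaults.

# Sketch only: snapshot an existing claim, then restore it into a new claim.
kubectl --context addons-721094 apply -f - <<'EOF'
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: new-snapshot-demo
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass   # assumed addon default
  source:
    persistentVolumeClaimName: hpvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc-restore
spec:
  storageClassName: csi-hostpath-sc                 # assumed addon default
  dataSource:
    name: new-snapshot-demo
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
EOF
# Wait for the snapshot to become usable before pods consume the restored claim.
kubectl --context addons-721094 wait volumesnapshot/new-snapshot-demo \
  --for=jsonpath='{.status.readyToUse}'=true --timeout=6m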

TestAddons/parallel/Headlamp (18.47s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-721094 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-85f8f8dc54-m655d" [10ba6004-dfbf-4ce3-96eb-fc4fbb60ce55] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-85f8f8dc54-m655d" [10ba6004-dfbf-4ce3-96eb-fc4fbb60ce55] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.003784073s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-721094 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-721094 addons disable headlamp --alsologtostderr -v=1: (5.705162795s)
--- PASS: TestAddons/parallel/Headlamp (18.47s)

TestAddons/parallel/CloudSpanner (5.54s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-85f6b7fc65-n9vz4" [6403d8b1-884f-40c6-9757-bca8e04ec7e2] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.00315575s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-721094 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.54s)

TestAddons/parallel/LocalPath (55.6s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-721094 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-721094 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-721094 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-721094 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-721094 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-721094 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-721094 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-721094 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-721094 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-721094 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [e5286a14-525f-4494-a727-c0fe954ae570] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [e5286a14-525f-4494-a727-c0fe954ae570] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [e5286a14-525f-4494-a727-c0fe954ae570] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.002752751s
addons_test.go:967: (dbg) Run:  kubectl --context addons-721094 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-721094 ssh "cat /opt/local-path-provisioner/pvc-11c6a651-5517-404a-8a85-c61e4ebf2afe_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-721094 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-721094 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-721094 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-721094 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.726514746s)
--- PASS: TestAddons/parallel/LocalPath (55.60s)
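The storage-provisioner-rancher addon behind this test provisions hostPath-backed volumes on demand. A minimal claim along these lines would trigger it; this is a sketch, not the actual testdata/storage-provisioner-rancher/pvc.yaml, and the local-path class name and 64Mi size are assumptions based on the provisioner's defaults.

kubectl --context addons-721094 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  storageClassName: local-path   # assumed default class of the local-path provisioner
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 64Mi
EOF
# The claim typically stays Pending (WaitForFirstConsumer) until a pod mounts it,
# which matches the repeated phase polling seen above.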

TestAddons/parallel/NvidiaDevicePlugin (5.61s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-4b4ln" [e0b7d031-c996-4cc7-ad5b-1e9b0536acb3] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003238539s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-721094 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.61s)

TestAddons/parallel/Yakd (11.74s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-sngp5" [f3c1a468-4b3f-448b-a88d-3f067c6df441] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.0039294s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-721094 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-721094 addons disable yakd --alsologtostderr -v=1: (5.733316981s)
--- PASS: TestAddons/parallel/Yakd (11.74s)

TestAddons/parallel/AmdGpuDevicePlugin (5.62s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-h8thq" [39eb999d-7a68-4e10-a475-ec78d5b61aa3] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.004363055s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-721094 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (5.62s)

TestAddons/StoppedEnableDisable (18.44s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-721094
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-721094: (18.181769186s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-721094
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-721094
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-721094
--- PASS: TestAddons/StoppedEnableDisable (18.44s)

TestCertOptions (28.07s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-633211 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-633211 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (24.951278068s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-633211 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-633211 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-633211 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-633211" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-633211
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-633211: (2.488607255s)
--- PASS: TestCertOptions (28.07s)
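The assertions above boil down to checking the generated apiserver certificate and kubeconfig for the extra SANs and the non-default port. A manual equivalent (sketch; the grep pattern and jsonpath are assumptions, not taken from the test source):

# Show only the Subject Alternative Name entries of the generated apiserver cert.
out/minikube-linux-amd64 -p cert-options-633211 ssh \
  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
  | grep -A1 "Subject Alternative Name"

# The requested --apiserver-port should appear in the kubeconfig server URL.
kubectl --context cert-options-633211 config view \
  -o jsonpath='{.clusters[?(@.name=="cert-options-633211")].cluster.server}'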

TestCertExpiration (213.57s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-478778 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-478778 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (24.156738063s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-478778 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-478778 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (6.678202231s)
helpers_test.go:175: Cleaning up "cert-expiration-478778" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-478778
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-478778: (2.738642794s)
--- PASS: TestCertExpiration (213.57s)
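Outside the test, the effect of --cert-expiration can be observed by reading the certificate's expiry date from the node. A sketch, using a hypothetical profile name (cert-expiration-demo) rather than the one in this run:

# Start with short-lived certs, then inspect their expiry date.
out/minikube-linux-amd64 start -p cert-expiration-demo --cert-expiration=3m \
  --driver=docker --container-runtime=crio
out/minikube-linux-amd64 -p cert-expiration-demo ssh \
  "openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"
# Re-running start with --cert-expiration=8760h regenerates the certs with a
# one-year lifetime, which is what the second start in the log above does.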

TestForceSystemdFlag (31.38s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-453384 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-453384 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (28.71810176s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-453384 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-453384" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-453384
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-453384: (2.377663338s)
--- PASS: TestForceSystemdFlag (31.38s)
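The check above reads CRI-O's drop-in config; with --force-systemd the relevant setting is the cgroup manager. A manual sketch (the expected value is an assumption about CRI-O's cgroup_manager key, not quoted from this log):

# With --force-systemd, CRI-O should be configured for the systemd cgroup manager.
out/minikube-linux-amd64 -p force-systemd-flag-453384 ssh \
  "cat /etc/crio/crio.conf.d/02-crio.conf" | grep -i cgroup_manager
# Expected (assumption): cgroup_manager = "systemd"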

TestForceSystemdEnv (39.46s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-714066 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-714066 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (37.042942167s)
helpers_test.go:175: Cleaning up "force-systemd-env-714066" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-714066
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-714066: (2.420830736s)
--- PASS: TestForceSystemdEnv (39.46s)

TestKVMDriverInstallOrUpdate (1.04s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
I0929 11:35:30.712336  132495 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0929 11:35:30.712463  132495 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate807595857/001:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0929 11:35:30.744474  132495 install.go:163] /tmp/TestKVMDriverInstallOrUpdate807595857/001/docker-machine-driver-kvm2 version is 1.1.1
W0929 11:35:30.744505  132495 install.go:76] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.37.0
W0929 11:35:30.744628  132495 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0929 11:35:30.744718  132495 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate807595857/001/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (1.04s)

TestErrorSpam/setup (19.95s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-245486 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-245486 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-245486 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-245486 --driver=docker  --container-runtime=crio: (19.952781343s)
--- PASS: TestErrorSpam/setup (19.95s)

TestErrorSpam/start (0.67s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-245486 --log_dir /tmp/nospam-245486 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-245486 --log_dir /tmp/nospam-245486 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-245486 --log_dir /tmp/nospam-245486 start --dry-run
--- PASS: TestErrorSpam/start (0.67s)

TestErrorSpam/status (0.94s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-245486 --log_dir /tmp/nospam-245486 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-245486 --log_dir /tmp/nospam-245486 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-245486 --log_dir /tmp/nospam-245486 status
--- PASS: TestErrorSpam/status (0.94s)

TestErrorSpam/pause (1.48s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-245486 --log_dir /tmp/nospam-245486 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-245486 --log_dir /tmp/nospam-245486 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-245486 --log_dir /tmp/nospam-245486 pause
--- PASS: TestErrorSpam/pause (1.48s)

TestErrorSpam/unpause (1.5s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-245486 --log_dir /tmp/nospam-245486 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-245486 --log_dir /tmp/nospam-245486 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-245486 --log_dir /tmp/nospam-245486 unpause
--- PASS: TestErrorSpam/unpause (1.50s)

TestErrorSpam/stop (12.53s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-245486 --log_dir /tmp/nospam-245486 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-245486 --log_dir /tmp/nospam-245486 stop: (12.34029205s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-245486 --log_dir /tmp/nospam-245486 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-245486 --log_dir /tmp/nospam-245486 stop
--- PASS: TestErrorSpam/stop (12.53s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21656-128977/.minikube/files/etc/test/nested/copy/132495/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (68.12s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-992121 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E0929 10:57:24.362371  132495 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/addons-721094/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:57:24.368791  132495 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/addons-721094/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:57:24.380200  132495 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/addons-721094/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:57:24.401609  132495 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/addons-721094/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:57:24.443057  132495 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/addons-721094/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:57:24.524558  132495 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/addons-721094/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:57:24.686032  132495 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/addons-721094/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:57:25.007807  132495 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/addons-721094/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:57:25.649865  132495 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/addons-721094/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:57:26.931260  132495 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/addons-721094/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:57:29.494153  132495 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/addons-721094/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:57:34.615939  132495 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/addons-721094/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:57:44.857609  132495 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/addons-721094/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-992121 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m8.115420281s)
--- PASS: TestFunctional/serial/StartWithProxy (68.12s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.71s)

=== RUN   TestFunctional/serial/SoftStart
I0929 10:57:55.959979  132495 config.go:182] Loaded profile config "functional-992121": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-992121 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-992121 --alsologtostderr -v=8: (6.708386784s)
functional_test.go:678: soft start took 6.709208325s for "functional-992121" cluster.
I0929 10:58:02.668801  132495 config.go:182] Loaded profile config "functional-992121": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/SoftStart (6.71s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-992121 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.48s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-992121 cache add registry.k8s.io/pause:3.1: (1.459483874s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 cache add registry.k8s.io/pause:3.3
E0929 10:58:05.339356  132495 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/addons-721094/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-992121 cache add registry.k8s.io/pause:3.3: (1.505138476s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-992121 cache add registry.k8s.io/pause:latest: (1.514016381s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.48s)

TestFunctional/serial/CacheCmd/cache/add_local (2.49s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-992121 /tmp/TestFunctionalserialCacheCmdcacheadd_local3182401251/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 cache add minikube-local-cache-test:functional-992121
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-992121 cache add minikube-local-cache-test:functional-992121: (2.158574604s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 cache delete minikube-local-cache-test:functional-992121
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-992121
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.49s)
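The add_local variant builds a throwaway image on the host and loads it into minikube's cache. The generated build context is not shown in the log; a stand-in flow with an assumed Dockerfile looks like this:

# Build a tiny local image, cache it into the cluster, then clean up.
mkdir -p /tmp/local-cache-demo && cd /tmp/local-cache-demo
cat > Dockerfile <<'EOF'
FROM busybox:1.36
RUN echo local-cache-test > /tag
EOF
docker build -t minikube-local-cache-test:functional-992121 .
out/minikube-linux-amd64 -p functional-992121 cache add minikube-local-cache-test:functional-992121
out/minikube-linux-amd64 -p functional-992121 cache delete minikube-local-cache-test:functional-992121
docker rmi minikube-local-cache-test:functional-992121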

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.22s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-992121 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (279.915401ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-linux-amd64 -p functional-992121 cache reload: (1.349725223s)
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.22s)
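The reload test removes an image from the node's runtime and restores it from the on-host cache. The same round trip, using only commands visible in the log:

# Remove the cached image from the container runtime inside the node.
out/minikube-linux-amd64 -p functional-992121 ssh sudo crictl rmi registry.k8s.io/pause:latest
# 'cache reload' pushes every image in the local cache back into the node.
out/minikube-linux-amd64 -p functional-992121 cache reload
# Verify the image is present again (this is the step that failed before the reload).
out/minikube-linux-amd64 -p functional-992121 ssh sudo crictl inspecti registry.k8s.io/pause:latest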

TestFunctional/serial/CacheCmd/cache/delete (0.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 kubectl -- --context functional-992121 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-992121 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (46.92s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-992121 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0929 10:58:46.301090  132495 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/addons-721094/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-992121 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (46.915273205s)
functional_test.go:776: restart took 46.91541039s for "functional-992121" cluster.
I0929 10:58:59.591102  132495 config.go:182] Loaded profile config "functional-992121": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/ExtraConfig (46.92s)
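--extra-config takes component.key=value pairs and is applied on restart. Whether the admission plugin actually reached the API server can be checked on its static pod; the label selector and jsonpath below are assumptions about a kubeadm-style control plane, not taken from the test:

# Restart with an extra API server flag, then confirm it on the pod's command line.
out/minikube-linux-amd64 start -p functional-992121 \
  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
kubectl --context functional-992121 -n kube-system get pod -l component=kube-apiserver \
  -o jsonpath='{.items[0].spec.containers[0].command}' | tr ',' '\n' | grep enable-admission-plugins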

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-992121 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.4s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-992121 logs: (1.402307867s)
--- PASS: TestFunctional/serial/LogsCmd (1.40s)

TestFunctional/serial/LogsFileCmd (1.44s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 logs --file /tmp/TestFunctionalserialLogsFileCmd2800265353/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-992121 logs --file /tmp/TestFunctionalserialLogsFileCmd2800265353/001/logs.txt: (1.43391962s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.44s)

TestFunctional/serial/InvalidService (3.96s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-992121 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-992121
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-992121: exit status 115 (342.383754ms)
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31253 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-992121 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.96s)
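The SVC_UNREACHABLE exit above is produced when a NodePort service has no running pod behind it. testdata/invalidsvc.yaml is not reproduced in the log; a service whose selector matches nothing (names below are assumptions) exercises the same error path:

kubectl --context functional-992121 apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: invalid-svc
spec:
  type: NodePort
  selector:
    app: does-not-exist   # assumed: no pod carries this label, so the service has no endpoints
  ports:
  - port: 80
    targetPort: 80
EOF
# 'minikube service' then exits with SVC_UNREACHABLE, as in the run above.
out/minikube-linux-amd64 -p functional-992121 service invalid-svc
kubectl --context functional-992121 delete svc invalid-svc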

TestFunctional/parallel/ConfigCmd (0.4s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-992121 config get cpus: exit status 14 (91.805143ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-992121 config get cpus: exit status 14 (63.833357ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.40s)

TestFunctional/parallel/DashboardCmd (15.89s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-992121 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-992121 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 170587: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (15.89s)

TestFunctional/parallel/DryRun (0.47s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-992121 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-992121 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (202.595587ms)
-- stdout --
	* [functional-992121] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21656
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21656-128977/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21656-128977/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0929 10:59:08.552723  169786 out.go:360] Setting OutFile to fd 1 ...
	I0929 10:59:08.553071  169786 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:59:08.553084  169786 out.go:374] Setting ErrFile to fd 2...
	I0929 10:59:08.553092  169786 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:59:08.553332  169786 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21656-128977/.minikube/bin
	I0929 10:59:08.553988  169786 out.go:368] Setting JSON to false
	I0929 10:59:08.554986  169786 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":2487,"bootTime":1759141062,"procs":225,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 10:59:08.555081  169786 start.go:140] virtualization: kvm guest
	I0929 10:59:08.557010  169786 out.go:179] * [functional-992121] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 10:59:08.558778  169786 out.go:179]   - MINIKUBE_LOCATION=21656
	I0929 10:59:08.558819  169786 notify.go:220] Checking for updates...
	I0929 10:59:08.561764  169786 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 10:59:08.563049  169786 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21656-128977/kubeconfig
	I0929 10:59:08.564168  169786 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21656-128977/.minikube
	I0929 10:59:08.565344  169786 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 10:59:08.566407  169786 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 10:59:08.570070  169786 config.go:182] Loaded profile config "functional-992121": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 10:59:08.570749  169786 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 10:59:08.600335  169786 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 10:59:08.600440  169786 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 10:59:08.676360  169786 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-09-29 10:59:08.662334786 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 10:59:08.676521  169786 docker.go:318] overlay module found
	I0929 10:59:08.679969  169786 out.go:179] * Using the docker driver based on existing profile
	I0929 10:59:08.681460  169786 start.go:304] selected driver: docker
	I0929 10:59:08.681477  169786 start.go:924] validating driver "docker" against &{Name:functional-992121 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-992121 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mou
ntUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 10:59:08.681571  169786 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 10:59:08.683179  169786 out.go:203] 
	W0929 10:59:08.684741  169786 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0929 10:59:08.685962  169786 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-992121 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.47s)
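The RSRC_INSUFFICIENT_REQ_MEMORY exit above is the expected outcome of the dry-run invocation that passes --memory 250MB; the later invocation without it succeeds, so the test passes. A minimal sketch of that kind of requested-memory floor check, using only the figures quoted in the logged error message (an illustration, not minikube's actual validation code):

package main

import "fmt"

func main() {
	const minUsableMB = 1800 // floor quoted in the logged error message
	requestedMB := 250       // from the --memory 250MB flag
	if requestedMB < minUsableMB {
		fmt.Printf("Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation %dMiB is less than the usable minimum of %dMB\n",
			requestedMB, minUsableMB)
	}
}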

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-992121 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-992121 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (187.665643ms)

                                                
                                                
-- stdout --
	* [functional-992121] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21656
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21656-128977/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21656-128977/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 10:59:08.353734  169637 out.go:360] Setting OutFile to fd 1 ...
	I0929 10:59:08.353865  169637 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:59:08.353879  169637 out.go:374] Setting ErrFile to fd 2...
	I0929 10:59:08.353885  169637 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:59:08.354328  169637 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21656-128977/.minikube/bin
	I0929 10:59:08.354966  169637 out.go:368] Setting JSON to false
	I0929 10:59:08.356162  169637 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":2486,"bootTime":1759141062,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 10:59:08.356263  169637 start.go:140] virtualization: kvm guest
	I0929 10:59:08.358263  169637 out.go:179] * [functional-992121] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I0929 10:59:08.359645  169637 notify.go:220] Checking for updates...
	I0929 10:59:08.359677  169637 out.go:179]   - MINIKUBE_LOCATION=21656
	I0929 10:59:08.360937  169637 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 10:59:08.362103  169637 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21656-128977/kubeconfig
	I0929 10:59:08.363237  169637 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21656-128977/.minikube
	I0929 10:59:08.364267  169637 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 10:59:08.365332  169637 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 10:59:08.367227  169637 config.go:182] Loaded profile config "functional-992121": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 10:59:08.368028  169637 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 10:59:08.394320  169637 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 10:59:08.394421  169637 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 10:59:08.469378  169637 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-09-29 10:59:08.457158334 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 10:59:08.469507  169637 docker.go:318] overlay module found
	I0929 10:59:08.473819  169637 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I0929 10:59:08.475126  169637 start.go:304] selected driver: docker
	I0929 10:59:08.475145  169637 start.go:924] validating driver "docker" against &{Name:functional-992121 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-992121 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mou
ntUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 10:59:08.475250  169637 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 10:59:08.477529  169637 out.go:203] 
	W0929 10:59:08.478716  169637 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0929 10:59:08.479966  169637 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.19s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.10s)
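The -f argument in the second status command above is a Go text/template rendered against minikube's status output. A minimal standalone sketch of how such a template evaluates; the Status struct and its values here are hypothetical, and the field spelling (including "kublet") is copied verbatim from the logged command:

package main

import (
	"os"
	"text/template"
)

// Status mirrors the placeholders used in the logged -f format string.
type Status struct {
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
}

func main() {
	const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
	tmpl := template.Must(template.New("status").Parse(format))
	s := Status{Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"}
	if err := tmpl.Execute(os.Stdout, s); err != nil {
		panic(err)
	}
}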

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (31.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [914b2f8a-60a7-472b-a0be-135c45d621ea] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004820861s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-992121 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-992121 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-992121 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-992121 apply -f testdata/storage-provisioner/pod.yaml
I0929 10:59:15.844192  132495 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [e73e4b54-16b7-4621-8288-4eab1d145c0b] Pending
helpers_test.go:352: "sp-pod" [e73e4b54-16b7-4621-8288-4eab1d145c0b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [e73e4b54-16b7-4621-8288-4eab1d145c0b] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 17.003633084s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-992121 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-992121 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-992121 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [97794b54-73bb-4f21-a422-418b17bebb16] Pending
helpers_test.go:352: "sp-pod" [97794b54-73bb-4f21-a422-418b17bebb16] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [97794b54-73bb-4f21-a422-418b17bebb16] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.002987089s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-992121 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (31.36s)
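The sequence above checks that data written to the PVC-backed volume survives a pod recreation. A minimal sketch of the same check with plain kubectl calls, assuming the pod manifest from the test's testdata directory and skipping the readiness waits the test performs between steps:

package main

import (
	"fmt"
	"os/exec"
)

// run shells out to kubectl against the functional-992121 context and echoes the output.
func run(args ...string) error {
	out, err := exec.Command("kubectl", append([]string{"--context", "functional-992121"}, args...)...).CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	_ = run("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")             // write a marker file onto the PVC
	_ = run("delete", "-f", "testdata/storage-provisioner/pod.yaml")       // remove the pod, keep the claim
	_ = run("apply", "-f", "testdata/storage-provisioner/pod.yaml")        // recreate the pod on the same claim
	_ = run("exec", "sp-pod", "--", "ls", "/tmp/mount")                    // the marker file should still be there
}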

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.56s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 ssh -n functional-992121 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 cp functional-992121:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1653102119/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 ssh -n functional-992121 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 ssh -n functional-992121 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.85s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (17.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-992121 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-wkvlt" [d6995faa-db5e-439c-8125-c512a2f162b8] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-wkvlt" [d6995faa-db5e-439c-8125-c512a2f162b8] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 16.004181991s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-992121 exec mysql-5bb876957f-wkvlt -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-992121 exec mysql-5bb876957f-wkvlt -- mysql -ppassword -e "show databases;": exit status 1 (110.278874ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0929 10:59:56.236459  132495 retry.go:31] will retry after 678.838289ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-992121 exec mysql-5bb876957f-wkvlt -- mysql -ppassword -e "show databases;"
E0929 11:00:08.223118  132495 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/addons-721094/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:02:24.356785  132495 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/addons-721094/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:02:52.065035  132495 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/addons-721094/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:07:24.357197  132495 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/addons-721094/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/MySQL (17.06s)
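The first "show databases;" attempt fails while mysqld is still starting inside the pod, and the harness simply retries after a short delay (the retry.go lines above). A minimal retry-with-backoff sketch of that pattern, shelling out to kubectl; it is an illustration, not the test helper's implementation:

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

func main() {
	args := []string{"--context", "functional-992121", "exec", "mysql-5bb876957f-wkvlt", "--",
		"mysql", "-ppassword", "-e", "show databases;"}
	backoff := 500 * time.Millisecond
	for attempt := 1; attempt <= 5; attempt++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			fmt.Print(string(out))
			return
		}
		// Add jitter and back off before the next attempt.
		wait := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("attempt %d failed (%v), will retry after %v\n", attempt, err, wait)
		time.Sleep(wait)
		backoff *= 2
	}
	fmt.Println("giving up")
}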

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/132495/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 ssh "sudo cat /etc/test/nested/copy/132495/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.26s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/132495.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 ssh "sudo cat /etc/ssl/certs/132495.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/132495.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 ssh "sudo cat /usr/share/ca-certificates/132495.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/1324952.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 ssh "sudo cat /etc/ssl/certs/1324952.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/1324952.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 ssh "sudo cat /usr/share/ca-certificates/1324952.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.68s)

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-992121 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-992121 ssh "sudo systemctl is-active docker": exit status 1 (270.691786ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-992121 ssh "sudo systemctl is-active containerd": exit status 1 (292.693227ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.56s)
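The non-zero exits above are the expected result: systemctl is-active prints the unit state and returns a non-zero status (3 here) when the unit is inactive, which is what the test wants for docker and containerd on a crio cluster. A minimal sketch of the same probe, run locally rather than over minikube ssh:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, unit := range []string{"docker", "containerd"} {
		// CombinedOutput captures "inactive" on stdout even though the command exits non-zero.
		out, err := exec.Command("systemctl", "is-active", unit).CombinedOutput()
		fmt.Printf("%s: %s (err=%v)\n", unit, strings.TrimSpace(string(out)), err)
	}
}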

                                                
                                    
x
+
TestFunctional/parallel/License (0.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.89s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.54s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (9.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-992121 /tmp/TestFunctionalparallelMountCmdany-port3062046797/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1759143546859175586" to /tmp/TestFunctionalparallelMountCmdany-port3062046797/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1759143546859175586" to /tmp/TestFunctionalparallelMountCmdany-port3062046797/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1759143546859175586" to /tmp/TestFunctionalparallelMountCmdany-port3062046797/001/test-1759143546859175586
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-992121 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (323.45415ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0929 10:59:07.182978  132495 retry.go:31] will retry after 625.040799ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 29 10:59 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 29 10:59 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 29 10:59 test-1759143546859175586
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 ssh cat /mount-9p/test-1759143546859175586
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-992121 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [7fec8bd9-ea8d-47c9-9f3c-38298f4dc50f] Pending
helpers_test.go:352: "busybox-mount" [7fec8bd9-ea8d-47c9-9f3c-38298f4dc50f] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [7fec8bd9-ea8d-47c9-9f3c-38298f4dc50f] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [7fec8bd9-ea8d-47c9-9f3c-38298f4dc50f] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.003995979s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-992121 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-992121 /tmp/TestFunctionalparallelMountCmdany-port3062046797/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.15s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "378.300791ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "50.901764ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "356.618214ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "62.907068ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.48s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-992121 /tmp/TestFunctionalparallelMountCmdspecific-port1174335830/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-992121 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (282.247483ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0929 10:59:16.293393  132495 retry.go:31] will retry after 587.461308ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-992121 /tmp/TestFunctionalparallelMountCmdspecific-port1174335830/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-992121 ssh "sudo umount -f /mount-9p": exit status 1 (262.322048ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-992121 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-992121 /tmp/TestFunctionalparallelMountCmdspecific-port1174335830/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.87s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-992121 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1364454309/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-992121 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1364454309/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-992121 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1364454309/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-992121 ssh "findmnt -T" /mount1: exit status 1 (322.504669ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0929 10:59:18.200229  132495 retry.go:31] will retry after 380.748051ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-992121 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-992121 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1364454309/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-992121 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1364454309/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-992121 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1364454309/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.60s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-992121 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-992121 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-992121 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-992121 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 172762: os: process already finished
helpers_test.go:525: unable to kill pid 172585: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.51s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-992121 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (17.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-992121 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [2db347bc-8308-4335-89a9-0c14fc3fe9a3] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
2025/09/29 10:59:24 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:352: "nginx-svc" [2db347bc-8308-4335-89a9-0c14fc3fe9a3] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 17.003806576s
I0929 10:59:37.785987  132495 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (17.23s)
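The setup step waits up to 4m0s for the nginx-svc pod to report Ready. A minimal sketch of an equivalent wait using kubectl directly; the test framework uses its own pod watcher rather than kubectl wait:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("kubectl", "--context", "functional-992121",
		"wait", "--for=condition=ready", "pod", "-l", "run=nginx-svc",
		"--timeout=4m0s", "-n", "default")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("wait failed:", err)
	}
}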

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-992121 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.0
registry.k8s.io/kube-proxy:v1.34.0
registry.k8s.io/kube-controller-manager:v1.34.0
registry.k8s.io/kube-apiserver:v1.34.0
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-992121
localhost/kicbase/echo-server:functional-992121
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-992121 image ls --format short --alsologtostderr:
I0929 10:59:42.343145  175642 out.go:360] Setting OutFile to fd 1 ...
I0929 10:59:42.343406  175642 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 10:59:42.343415  175642 out.go:374] Setting ErrFile to fd 2...
I0929 10:59:42.343419  175642 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 10:59:42.343607  175642 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21656-128977/.minikube/bin
I0929 10:59:42.344217  175642 config.go:182] Loaded profile config "functional-992121": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 10:59:42.344309  175642 config.go:182] Loaded profile config "functional-992121": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 10:59:42.344670  175642 cli_runner.go:164] Run: docker container inspect functional-992121 --format={{.State.Status}}
I0929 10:59:42.363633  175642 ssh_runner.go:195] Run: systemctl --version
I0929 10:59:42.363696  175642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-992121
I0929 10:59:42.381408  175642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21656-128977/.minikube/machines/functional-992121/id_rsa Username:docker}
I0929 10:59:42.473765  175642 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-992121 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ docker.io/library/nginx                 │ latest             │ 41f689c209100 │ 197MB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.0            │ 46169d968e920 │ 53.8MB │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ localhost/kicbase/echo-server           │ functional-992121  │ 9056ab77afb8e │ 4.94MB │
│ localhost/my-image                      │ functional-992121  │ 6ed84d5ee9777 │ 1.47MB │
│ registry.k8s.io/kube-proxy              │ v1.34.0            │ df0860106674d │ 73.1MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ docker.io/library/nginx                 │ alpine             │ 4a86014ec6994 │ 53.9MB │
│ localhost/minikube-local-cache-test     │ functional-992121  │ 66d1126bbc6ae │ 3.33kB │
│ registry.k8s.io/kube-apiserver          │ v1.34.0            │ 90550c43ad2bc │ 89.1MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.0            │ a0af72f2ec6d6 │ 76MB   │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-992121 image ls --format table --alsologtostderr:
I0929 10:59:49.915718  176492 out.go:360] Setting OutFile to fd 1 ...
I0929 10:59:49.915840  176492 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 10:59:49.915851  176492 out.go:374] Setting ErrFile to fd 2...
I0929 10:59:49.915858  176492 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 10:59:49.916057  176492 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21656-128977/.minikube/bin
I0929 10:59:49.916653  176492 config.go:182] Loaded profile config "functional-992121": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 10:59:49.916752  176492 config.go:182] Loaded profile config "functional-992121": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 10:59:49.917138  176492 cli_runner.go:164] Run: docker container inspect functional-992121 --format={{.State.Status}}
I0929 10:59:49.935784  176492 ssh_runner.go:195] Run: systemctl --version
I0929 10:59:49.935854  176492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-992121
I0929 10:59:49.954776  176492 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21656-128977/.minikube/machines/functional-992121/id_rsa Username:docker}
I0929 10:59:50.046862  176492 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-992121 image ls --format json --alsologtostderr:
[{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9","repoDigests":["docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8","docker.io/library/nginx@sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a"],"repoTags":["docker.io/library/nginx:alpine"],"size":"53949946"},{"id":"41f689c209100e6cadf3ce7fdd02035e90dbd1d586716bf8fc6ea55c365b2d81","repoDi
gests":["docker.io/library/nginx@sha256:27637a97e3d1d0518adc2a877b60db3779970f19474b6e586ddcbc2d5500e285","docker.io/library/nginx@sha256:d5f28ef21aabddd098f3dbc21fe5b7a7d7a184720bc07da0b6c9b9820e97f25e"],"repoTags":["docker.io/library/nginx:latest"],"size":"196550530"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90","repoDigests":["registry.k8s.io/kube-apiserver@sha256:495d3253a47a9a64a62041d518678c8b101fb628488e729d9f52ddff7cf82a86","registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.0"],"size":"89050097"},{"id":"a0af72f2ec6d628152b015a46d4074d
f8f77d5b686978987c70f48b8c7660634","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:82ea603ed3cce63f9f870f22299741e0011318391cf722dd924a1d5a9f8ce6f6","registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.0"],"size":"76004183"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":
[],"size":"43824855"},{"id":"8ff6e5d50a9b6eb2646fbc4a6c38911f8eab6d95599df73a904111d6c57ff63d","repoDigests":["docker.io/library/5fbfe8f10f96a7662c6578822404701cd6805f833b351c48662a0f17c4ec1b12-tmp@sha256:60bd43381140d0aa66c9ca62c87123cb0c3277c1c18319528c5f374636976f0f"],"repoTags":[],"size":"1465611"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-992121"],"size":"4943877"},{"id":"46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc","repoDigests":["registry.k8s.io/kube-scheduler@sha256:31b77e40d737b6d3e3b19b4afd681c9362aef06353075895452fc9a41fe34140","registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.0"],"size":"53844823"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","
repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-p
rovisioner:v5"],"size":"31470524"},{"id":"66d1126bbc6ae84fed1f84c62aa016856f24642a151d7a93f123381ceaa3d3e2","repoDigests":["localhost/minikube-local-cache-test@sha256:f149ebb5543dbeb01120a182fe7ee75108d583caf33ce043e07149e4bc13881e"],"repoTags":["localhost/minikube-local-cache-test:functional-992121"],"size":"3330"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[
"registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"6ed84d5ee9777a4e5f4fb0d180353ece30f73b9486d9df76bec647cc07b96a58","repoDigests":["localhost/my-image@sha256:867871902b0ad600dd7130c474e670c38b5a18ff06
5d73d3f2279d261f52d2b7"],"repoTags":["localhost/my-image:functional-992121"],"size":"1468193"},{"id":"df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce","repoDigests":["registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067","registry.k8s.io/kube-proxy@sha256:5f71731a5eadcf74f3997dfc159bf5ca36e48c3387c19082fc21871e0dbb19af"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.0"],"size":"73138071"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-992121 image ls --format json --alsologtostderr:
I0929 10:59:49.597149  176431 out.go:360] Setting OutFile to fd 1 ...
I0929 10:59:49.597435  176431 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 10:59:49.597446  176431 out.go:374] Setting ErrFile to fd 2...
I0929 10:59:49.597450  176431 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 10:59:49.597677  176431 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21656-128977/.minikube/bin
I0929 10:59:49.598319  176431 config.go:182] Loaded profile config "functional-992121": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 10:59:49.598411  176431 config.go:182] Loaded profile config "functional-992121": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 10:59:49.598782  176431 cli_runner.go:164] Run: docker container inspect functional-992121 --format={{.State.Status}}
I0929 10:59:49.619836  176431 ssh_runner.go:195] Run: systemctl --version
I0929 10:59:49.619890  176431 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-992121
I0929 10:59:49.638728  176431 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21656-128977/.minikube/machines/functional-992121/id_rsa Username:docker}
I0929 10:59:49.733669  176431 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-992121 image ls --format yaml --alsologtostderr:
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 66d1126bbc6ae84fed1f84c62aa016856f24642a151d7a93f123381ceaa3d3e2
repoDigests:
- localhost/minikube-local-cache-test@sha256:f149ebb5543dbeb01120a182fe7ee75108d583caf33ce043e07149e4bc13881e
repoTags:
- localhost/minikube-local-cache-test:functional-992121
size: "3330"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9
repoDigests:
- docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8
- docker.io/library/nginx@sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a
repoTags:
- docker.io/library/nginx:alpine
size: "53949946"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-992121
size: "4943877"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:82ea603ed3cce63f9f870f22299741e0011318391cf722dd924a1d5a9f8ce6f6
- registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.0
size: "76004183"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 41f689c209100e6cadf3ce7fdd02035e90dbd1d586716bf8fc6ea55c365b2d81
repoDigests:
- docker.io/library/nginx@sha256:27637a97e3d1d0518adc2a877b60db3779970f19474b6e586ddcbc2d5500e285
- docker.io/library/nginx@sha256:d5f28ef21aabddd098f3dbc21fe5b7a7d7a184720bc07da0b6c9b9820e97f25e
repoTags:
- docker.io/library/nginx:latest
size: "196550530"
- id: df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce
repoDigests:
- registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067
- registry.k8s.io/kube-proxy@sha256:5f71731a5eadcf74f3997dfc159bf5ca36e48c3387c19082fc21871e0dbb19af
repoTags:
- registry.k8s.io/kube-proxy:v1.34.0
size: "73138071"
- id: 90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:495d3253a47a9a64a62041d518678c8b101fb628488e729d9f52ddff7cf82a86
- registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.0
size: "89050097"
- id: 46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:31b77e40d737b6d3e3b19b4afd681c9362aef06353075895452fc9a41fe34140
- registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.0
size: "53844823"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-992121 image ls --format yaml --alsologtostderr:
I0929 10:59:42.564616  175692 out.go:360] Setting OutFile to fd 1 ...
I0929 10:59:42.564928  175692 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 10:59:42.564940  175692 out.go:374] Setting ErrFile to fd 2...
I0929 10:59:42.564946  175692 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 10:59:42.565176  175692 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21656-128977/.minikube/bin
I0929 10:59:42.565791  175692 config.go:182] Loaded profile config "functional-992121": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 10:59:42.565930  175692 config.go:182] Loaded profile config "functional-992121": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 10:59:42.566337  175692 cli_runner.go:164] Run: docker container inspect functional-992121 --format={{.State.Status}}
I0929 10:59:42.584740  175692 ssh_runner.go:195] Run: systemctl --version
I0929 10:59:42.584789  175692 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-992121
I0929 10:59:42.602427  175692 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21656-128977/.minikube/machines/functional-992121/id_rsa Username:docker}
I0929 10:59:42.694724  175692 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)
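As the stderr above shows, the image listing is implemented by SSH-ing into the node and running crictl there. A minimal manual equivalent, sketched only from commands that already appear in this log:
	out/minikube-linux-amd64 -p functional-992121 ssh "sudo crictl images --output json"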

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (6.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-992121 ssh pgrep buildkitd: exit status 1 (255.83735ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 image build -t localhost/my-image:functional-992121 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-992121 image build -t localhost/my-image:functional-992121 testdata/build --alsologtostderr: (6.3295796s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-992121 image build -t localhost/my-image:functional-992121 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 8ff6e5d50a9
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-992121
--> 6ed84d5ee97
Successfully tagged localhost/my-image:functional-992121
6ed84d5ee9777a4e5f4fb0d180353ece30f73b9486d9df76bec647cc07b96a58
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-992121 image build -t localhost/my-image:functional-992121 testdata/build --alsologtostderr:
I0929 10:59:43.038337  175842 out.go:360] Setting OutFile to fd 1 ...
I0929 10:59:43.038635  175842 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 10:59:43.038646  175842 out.go:374] Setting ErrFile to fd 2...
I0929 10:59:43.038650  175842 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 10:59:43.038956  175842 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21656-128977/.minikube/bin
I0929 10:59:43.039565  175842 config.go:182] Loaded profile config "functional-992121": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 10:59:43.040218  175842 config.go:182] Loaded profile config "functional-992121": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 10:59:43.040670  175842 cli_runner.go:164] Run: docker container inspect functional-992121 --format={{.State.Status}}
I0929 10:59:43.058668  175842 ssh_runner.go:195] Run: systemctl --version
I0929 10:59:43.058724  175842 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-992121
I0929 10:59:43.076913  175842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21656-128977/.minikube/machines/functional-992121/id_rsa Username:docker}
I0929 10:59:43.169915  175842 build_images.go:161] Building image from path: /tmp/build.1498545710.tar
I0929 10:59:43.170010  175842 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0929 10:59:43.179521  175842 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1498545710.tar
I0929 10:59:43.183611  175842 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1498545710.tar: stat -c "%s %y" /var/lib/minikube/build/build.1498545710.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1498545710.tar': No such file or directory
I0929 10:59:43.183679  175842 ssh_runner.go:362] scp /tmp/build.1498545710.tar --> /var/lib/minikube/build/build.1498545710.tar (3072 bytes)
I0929 10:59:43.211473  175842 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1498545710
I0929 10:59:43.222469  175842 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1498545710 -xf /var/lib/minikube/build/build.1498545710.tar
I0929 10:59:43.231986  175842 crio.go:315] Building image: /var/lib/minikube/build/build.1498545710
I0929 10:59:43.232049  175842 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-992121 /var/lib/minikube/build/build.1498545710 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0929 10:59:49.296604  175842 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-992121 /var/lib/minikube/build/build.1498545710 --cgroup-manager=cgroupfs: (6.064524165s)
I0929 10:59:49.296689  175842 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1498545710
I0929 10:59:49.306312  175842 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1498545710.tar
I0929 10:59:49.315906  175842 build_images.go:217] Built localhost/my-image:functional-992121 from /tmp/build.1498545710.tar
I0929 10:59:49.315937  175842 build_images.go:133] succeeded building to: functional-992121
I0929 10:59:49.315941  175842 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (6.81s)
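For reference, the Dockerfile under testdata/build can be reconstructed from the STEP 1/3..3/3 lines in the build output above; this is only a sketch inferred from those lines, and the real file may differ:
	# Sketch of testdata/build Dockerfile, inferred from the STEP lines above
	FROM gcr.io/k8s-minikube/busybox
	RUN true
	ADD content.txt /
Building it through minikube exercises the same path the test uses:
	out/minikube-linux-amd64 -p functional-992121 image build -t localhost/my-image:functional-992121 testdata/build --alsologtostderr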

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.755262527s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-992121
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.78s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 image load --daemon kicbase/echo-server:functional-992121 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 image load --daemon kicbase/echo-server:functional-992121 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.88s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-992121
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 image load --daemon kicbase/echo-server:functional-992121 --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-linux-amd64 -p functional-992121 image load --daemon kicbase/echo-server:functional-992121 --alsologtostderr: (1.166782876s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 image save kicbase/echo-server:functional-992121 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.50s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 image rm kicbase/echo-server:functional-992121 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.75s)
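Taken together, ImageSaveToFile and ImageLoadFromFile amount to a simple tarball round trip; a minimal sketch using the same commands (the tar path is the one from the log and is only illustrative):
	# export an image from the cluster runtime to a tarball, then load it back and verify
	out/minikube-linux-amd64 -p functional-992121 image save kicbase/echo-server:functional-992121 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
	out/minikube-linux-amd64 -p functional-992121 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
	out/minikube-linux-amd64 -p functional-992121 image ls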

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-992121
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 image save --daemon kicbase/echo-server:functional-992121 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-992121
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.56s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-992121 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.105.131.202 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-992121 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
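The tunnel tests above follow the usual pattern: keep minikube tunnel running, read the LoadBalancer ingress IP off the service, then talk to that IP directly. A hedged sketch of that flow (the service name nginx-svc and the IP come from the log; running the tunnel in the background and the curl check are illustrative, the test manages the process itself):
	out/minikube-linux-amd64 -p functional-992121 tunnel --alsologtostderr &
	kubectl --context functional-992121 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
	curl -s http://10.105.131.202/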

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (1.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-992121 service list: (1.689834785s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.69s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (1.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-992121 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-992121 service list -o json: (1.682506215s)
functional_test.go:1504: Took "1.682597589s" to run "out/minikube-linux-amd64 -p functional-992121 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.68s)
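The -o json form is the one you would script against; for example, piping it through jq pretty-prints the output (jq is not part of the test and the exact field layout of the JSON is not shown in this log, so field extraction is left out of the sketch):
	out/minikube-linux-amd64 -p functional-992121 service list -o json | jq .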

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-992121
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-992121
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-992121
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (165.75s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E0929 11:12:24.357156  132495 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/addons-721094/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-657171 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (2m45.016433426s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (165.75s)
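The StartCluster step is just the HA variant of a normal start; condensed from the log above, the command plus the follow-up status check are:
	out/minikube-linux-amd64 -p ha-657171 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker --container-runtime=crio
	out/minikube-linux-amd64 -p ha-657171 status --alsologtostderr -v 5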

                                                
                                    
TestMultiControlPlane/serial/DeployApp (7.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-657171 kubectl -- rollout status deployment/busybox: (5.849161688s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 kubectl -- exec busybox-7b57f96db7-fpfzw -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 kubectl -- exec busybox-7b57f96db7-sngtm -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 kubectl -- exec busybox-7b57f96db7-vmhbq -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 kubectl -- exec busybox-7b57f96db7-fpfzw -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 kubectl -- exec busybox-7b57f96db7-sngtm -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 kubectl -- exec busybox-7b57f96db7-vmhbq -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 kubectl -- exec busybox-7b57f96db7-fpfzw -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 kubectl -- exec busybox-7b57f96db7-sngtm -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 kubectl -- exec busybox-7b57f96db7-vmhbq -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.83s)
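The deploy test applies a busybox deployment and then checks in-cluster DNS from each pod; condensed from the commands above (the pod name placeholder stands in for whatever names the deployment generates):
	out/minikube-linux-amd64 -p ha-657171 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
	out/minikube-linux-amd64 -p ha-657171 kubectl -- rollout status deployment/busybox
	out/minikube-linux-amd64 -p ha-657171 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
	# for each pod name returned above:
	out/minikube-linux-amd64 -p ha-657171 kubectl -- exec <pod> -- nslookup kubernetes.default.svc.cluster.local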

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 kubectl -- exec busybox-7b57f96db7-fpfzw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 kubectl -- exec busybox-7b57f96db7-fpfzw -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 kubectl -- exec busybox-7b57f96db7-sngtm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 kubectl -- exec busybox-7b57f96db7-sngtm -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 kubectl -- exec busybox-7b57f96db7-vmhbq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 kubectl -- exec busybox-7b57f96db7-vmhbq -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.10s)
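The host-connectivity check boils down to resolving host.minikube.internal inside a pod and pinging the gateway address it maps to; condensed from the exec commands above (the pod name is illustrative):
	out/minikube-linux-amd64 -p ha-657171 kubectl -- exec <pod> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	out/minikube-linux-amd64 -p ha-657171 kubectl -- exec <pod> -- sh -c "ping -c 1 192.168.49.1"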

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (54.75s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-657171 node add --alsologtostderr -v 5: (53.882644035s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (54.75s)
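Adding the worker is a single node add followed by a status check, exactly as run above:
	out/minikube-linux-amd64 -p ha-657171 node add --alsologtostderr -v 5
	out/minikube-linux-amd64 -p ha-657171 status --alsologtostderr -v 5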

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-657171 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.9s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.90s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (16.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 cp testdata/cp-test.txt ha-657171:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 ssh -n ha-657171 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 cp ha-657171:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2751770802/001/cp-test_ha-657171.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 ssh -n ha-657171 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 cp ha-657171:/home/docker/cp-test.txt ha-657171-m02:/home/docker/cp-test_ha-657171_ha-657171-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 ssh -n ha-657171 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 ssh -n ha-657171-m02 "sudo cat /home/docker/cp-test_ha-657171_ha-657171-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 cp ha-657171:/home/docker/cp-test.txt ha-657171-m03:/home/docker/cp-test_ha-657171_ha-657171-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 ssh -n ha-657171 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 ssh -n ha-657171-m03 "sudo cat /home/docker/cp-test_ha-657171_ha-657171-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 cp ha-657171:/home/docker/cp-test.txt ha-657171-m04:/home/docker/cp-test_ha-657171_ha-657171-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 ssh -n ha-657171 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 ssh -n ha-657171-m04 "sudo cat /home/docker/cp-test_ha-657171_ha-657171-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 cp testdata/cp-test.txt ha-657171-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 ssh -n ha-657171-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 cp ha-657171-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2751770802/001/cp-test_ha-657171-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 ssh -n ha-657171-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 cp ha-657171-m02:/home/docker/cp-test.txt ha-657171:/home/docker/cp-test_ha-657171-m02_ha-657171.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 ssh -n ha-657171-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 ssh -n ha-657171 "sudo cat /home/docker/cp-test_ha-657171-m02_ha-657171.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 cp ha-657171-m02:/home/docker/cp-test.txt ha-657171-m03:/home/docker/cp-test_ha-657171-m02_ha-657171-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 ssh -n ha-657171-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 ssh -n ha-657171-m03 "sudo cat /home/docker/cp-test_ha-657171-m02_ha-657171-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 cp ha-657171-m02:/home/docker/cp-test.txt ha-657171-m04:/home/docker/cp-test_ha-657171-m02_ha-657171-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 ssh -n ha-657171-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 ssh -n ha-657171-m04 "sudo cat /home/docker/cp-test_ha-657171-m02_ha-657171-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 cp testdata/cp-test.txt ha-657171-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 ssh -n ha-657171-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 cp ha-657171-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2751770802/001/cp-test_ha-657171-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 ssh -n ha-657171-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 cp ha-657171-m03:/home/docker/cp-test.txt ha-657171:/home/docker/cp-test_ha-657171-m03_ha-657171.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 ssh -n ha-657171-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 ssh -n ha-657171 "sudo cat /home/docker/cp-test_ha-657171-m03_ha-657171.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 cp ha-657171-m03:/home/docker/cp-test.txt ha-657171-m02:/home/docker/cp-test_ha-657171-m03_ha-657171-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 ssh -n ha-657171-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 ssh -n ha-657171-m02 "sudo cat /home/docker/cp-test_ha-657171-m03_ha-657171-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 cp ha-657171-m03:/home/docker/cp-test.txt ha-657171-m04:/home/docker/cp-test_ha-657171-m03_ha-657171-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 ssh -n ha-657171-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 ssh -n ha-657171-m04 "sudo cat /home/docker/cp-test_ha-657171-m03_ha-657171-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 cp testdata/cp-test.txt ha-657171-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 ssh -n ha-657171-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 cp ha-657171-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2751770802/001/cp-test_ha-657171-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 ssh -n ha-657171-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 cp ha-657171-m04:/home/docker/cp-test.txt ha-657171:/home/docker/cp-test_ha-657171-m04_ha-657171.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 ssh -n ha-657171-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 ssh -n ha-657171 "sudo cat /home/docker/cp-test_ha-657171-m04_ha-657171.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 cp ha-657171-m04:/home/docker/cp-test.txt ha-657171-m02:/home/docker/cp-test_ha-657171-m04_ha-657171-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 ssh -n ha-657171-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 ssh -n ha-657171-m02 "sudo cat /home/docker/cp-test_ha-657171-m04_ha-657171-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 cp ha-657171-m04:/home/docker/cp-test.txt ha-657171-m03:/home/docker/cp-test_ha-657171-m04_ha-657171-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 ssh -n ha-657171-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 ssh -n ha-657171-m03 "sudo cat /home/docker/cp-test_ha-657171-m04_ha-657171-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (16.51s)
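Every hop in the CopyFile matrix follows the same two-step pattern: copy with minikube cp, then read the file back over minikube ssh to confirm it arrived. One representative pair from the log:
	out/minikube-linux-amd64 -p ha-657171 cp testdata/cp-test.txt ha-657171-m02:/home/docker/cp-test.txt
	out/minikube-linux-amd64 -p ha-657171 ssh -n ha-657171-m02 "sudo cat /home/docker/cp-test.txt"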

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (14.18s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 node stop m02 --alsologtostderr -v 5
E0929 11:13:47.426444  132495 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/addons-721094/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-657171 node stop m02 --alsologtostderr -v 5: (13.479243218s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-657171 status --alsologtostderr -v 5: exit status 7 (700.143148ms)

                                                
                                                
-- stdout --
	ha-657171
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-657171-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-657171-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-657171-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 11:14:00.132529  202135 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:14:00.132861  202135 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:14:00.132874  202135 out.go:374] Setting ErrFile to fd 2...
	I0929 11:14:00.132879  202135 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:14:00.133136  202135 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21656-128977/.minikube/bin
	I0929 11:14:00.133373  202135 out.go:368] Setting JSON to false
	I0929 11:14:00.133408  202135 mustload.go:65] Loading cluster: ha-657171
	I0929 11:14:00.133521  202135 notify.go:220] Checking for updates...
	I0929 11:14:00.133939  202135 config.go:182] Loaded profile config "ha-657171": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 11:14:00.133964  202135 status.go:174] checking status of ha-657171 ...
	I0929 11:14:00.134513  202135 cli_runner.go:164] Run: docker container inspect ha-657171 --format={{.State.Status}}
	I0929 11:14:00.154075  202135 status.go:371] ha-657171 host status = "Running" (err=<nil>)
	I0929 11:14:00.154099  202135 host.go:66] Checking if "ha-657171" exists ...
	I0929 11:14:00.154397  202135 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-657171
	I0929 11:14:00.173841  202135 host.go:66] Checking if "ha-657171" exists ...
	I0929 11:14:00.174147  202135 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 11:14:00.174189  202135 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-657171
	I0929 11:14:00.193510  202135 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21656-128977/.minikube/machines/ha-657171/id_rsa Username:docker}
	I0929 11:14:00.288069  202135 ssh_runner.go:195] Run: systemctl --version
	I0929 11:14:00.292564  202135 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 11:14:00.305498  202135 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 11:14:00.367989  202135 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-29 11:14:00.356666757 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 11:14:00.368971  202135 kubeconfig.go:125] found "ha-657171" server: "https://192.168.49.254:8443"
	I0929 11:14:00.369018  202135 api_server.go:166] Checking apiserver status ...
	I0929 11:14:00.369061  202135 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 11:14:00.381187  202135 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1459/cgroup
	W0929 11:14:00.390730  202135 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1459/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0929 11:14:00.390789  202135 ssh_runner.go:195] Run: ls
	I0929 11:14:00.394305  202135 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0929 11:14:00.398400  202135 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0929 11:14:00.398427  202135 status.go:463] ha-657171 apiserver status = Running (err=<nil>)
	I0929 11:14:00.398442  202135 status.go:176] ha-657171 status: &{Name:ha-657171 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 11:14:00.398467  202135 status.go:174] checking status of ha-657171-m02 ...
	I0929 11:14:00.398786  202135 cli_runner.go:164] Run: docker container inspect ha-657171-m02 --format={{.State.Status}}
	I0929 11:14:00.417254  202135 status.go:371] ha-657171-m02 host status = "Stopped" (err=<nil>)
	I0929 11:14:00.417296  202135 status.go:384] host is not running, skipping remaining checks
	I0929 11:14:00.417314  202135 status.go:176] ha-657171-m02 status: &{Name:ha-657171-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 11:14:00.417370  202135 status.go:174] checking status of ha-657171-m03 ...
	I0929 11:14:00.417788  202135 cli_runner.go:164] Run: docker container inspect ha-657171-m03 --format={{.State.Status}}
	I0929 11:14:00.436550  202135 status.go:371] ha-657171-m03 host status = "Running" (err=<nil>)
	I0929 11:14:00.436578  202135 host.go:66] Checking if "ha-657171-m03" exists ...
	I0929 11:14:00.436853  202135 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-657171-m03
	I0929 11:14:00.457040  202135 host.go:66] Checking if "ha-657171-m03" exists ...
	I0929 11:14:00.457347  202135 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 11:14:00.457390  202135 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-657171-m03
	I0929 11:14:00.478137  202135 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/21656-128977/.minikube/machines/ha-657171-m03/id_rsa Username:docker}
	I0929 11:14:00.572110  202135 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 11:14:00.584026  202135 kubeconfig.go:125] found "ha-657171" server: "https://192.168.49.254:8443"
	I0929 11:14:00.584054  202135 api_server.go:166] Checking apiserver status ...
	I0929 11:14:00.584087  202135 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 11:14:00.594703  202135 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1370/cgroup
	W0929 11:14:00.606236  202135 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1370/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0929 11:14:00.606292  202135 ssh_runner.go:195] Run: ls
	I0929 11:14:00.610355  202135 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0929 11:14:00.614366  202135 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0929 11:14:00.614392  202135 status.go:463] ha-657171-m03 apiserver status = Running (err=<nil>)
	I0929 11:14:00.614401  202135 status.go:176] ha-657171-m03 status: &{Name:ha-657171-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 11:14:00.614419  202135 status.go:174] checking status of ha-657171-m04 ...
	I0929 11:14:00.614639  202135 cli_runner.go:164] Run: docker container inspect ha-657171-m04 --format={{.State.Status}}
	I0929 11:14:00.637266  202135 status.go:371] ha-657171-m04 host status = "Running" (err=<nil>)
	I0929 11:14:00.637294  202135 host.go:66] Checking if "ha-657171-m04" exists ...
	I0929 11:14:00.637556  202135 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-657171-m04
	I0929 11:14:00.655985  202135 host.go:66] Checking if "ha-657171-m04" exists ...
	I0929 11:14:00.656276  202135 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 11:14:00.656326  202135 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-657171-m04
	I0929 11:14:00.674456  202135 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32918 SSHKeyPath:/home/jenkins/minikube-integration/21656-128977/.minikube/machines/ha-657171-m04/id_rsa Username:docker}
	I0929 11:14:00.768175  202135 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 11:14:00.779771  202135 status.go:176] ha-657171-m04 status: &{Name:ha-657171-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (14.18s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.71s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (9.13s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 node start m02 --alsologtostderr -v 5
E0929 11:14:06.656415  132495 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/functional-992121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:14:06.662840  132495 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/functional-992121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:14:06.674204  132495 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/functional-992121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:14:06.695649  132495 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/functional-992121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:14:06.737736  132495 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/functional-992121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:14:06.819982  132495 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/functional-992121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:14:06.982239  132495 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/functional-992121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:14:07.303951  132495 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/functional-992121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:14:07.945214  132495 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/functional-992121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:14:09.226705  132495 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/functional-992121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-657171 node start m02 --alsologtostderr -v 5: (8.208512324s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (9.13s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.89s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.89s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (114.28s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 stop --alsologtostderr -v 5
E0929 11:14:11.788708  132495 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/functional-992121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:14:16.910373  132495 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/functional-992121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:14:27.152564  132495 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/functional-992121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:14:47.634058  132495 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/functional-992121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-657171 stop --alsologtostderr -v 5: (43.338294974s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 start --wait true --alsologtostderr -v 5
E0929 11:15:28.595960  132495 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/functional-992121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-657171 start --wait true --alsologtostderr -v 5: (1m10.830347096s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (114.28s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (11.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-657171 node delete m03 --alsologtostderr -v 5: (10.558591845s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.37s)
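The readiness check above uses a go-template; an equivalent, minimal sketch with kubectl's jsonpath output (assuming kubectl is still pointed at the same ha-657171 cluster) prints each node name next to its Ready condition:

	# Print every node name together with its Ready condition status (True/False/Unknown).
	kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'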

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.70s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (43.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 stop --alsologtostderr -v 5
E0929 11:16:50.521063  132495 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/functional-992121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-657171 stop --alsologtostderr -v 5: (43.443584318s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-657171 status --alsologtostderr -v 5: exit status 7 (111.833258ms)

                                                
                                                
-- stdout --
	ha-657171
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-657171-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-657171-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 11:17:01.360461  218646 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:17:01.360748  218646 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:17:01.360760  218646 out.go:374] Setting ErrFile to fd 2...
	I0929 11:17:01.360765  218646 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:17:01.361010  218646 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21656-128977/.minikube/bin
	I0929 11:17:01.361248  218646 out.go:368] Setting JSON to false
	I0929 11:17:01.361279  218646 mustload.go:65] Loading cluster: ha-657171
	I0929 11:17:01.361376  218646 notify.go:220] Checking for updates...
	I0929 11:17:01.361748  218646 config.go:182] Loaded profile config "ha-657171": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 11:17:01.361773  218646 status.go:174] checking status of ha-657171 ...
	I0929 11:17:01.362261  218646 cli_runner.go:164] Run: docker container inspect ha-657171 --format={{.State.Status}}
	I0929 11:17:01.383993  218646 status.go:371] ha-657171 host status = "Stopped" (err=<nil>)
	I0929 11:17:01.384014  218646 status.go:384] host is not running, skipping remaining checks
	I0929 11:17:01.384020  218646 status.go:176] ha-657171 status: &{Name:ha-657171 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 11:17:01.384053  218646 status.go:174] checking status of ha-657171-m02 ...
	I0929 11:17:01.384294  218646 cli_runner.go:164] Run: docker container inspect ha-657171-m02 --format={{.State.Status}}
	I0929 11:17:01.403184  218646 status.go:371] ha-657171-m02 host status = "Stopped" (err=<nil>)
	I0929 11:17:01.403218  218646 status.go:384] host is not running, skipping remaining checks
	I0929 11:17:01.403225  218646 status.go:176] ha-657171-m02 status: &{Name:ha-657171-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 11:17:01.403252  218646 status.go:174] checking status of ha-657171-m04 ...
	I0929 11:17:01.403554  218646 cli_runner.go:164] Run: docker container inspect ha-657171-m04 --format={{.State.Status}}
	I0929 11:17:01.422353  218646 status.go:371] ha-657171-m04 host status = "Stopped" (err=<nil>)
	I0929 11:17:01.422373  218646 status.go:384] host is not running, skipping remaining checks
	I0929 11:17:01.422379  218646 status.go:176] ha-657171-m04 status: &{Name:ha-657171-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (43.56s)
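Note that the status command above exited with code 7 rather than 0 once every host was stopped, which is what the test keys on. A minimal scripted check along the same lines (profile name taken from this run; treating any non-zero exit as "not fully running" is an assumption beyond what the log itself states):

	# Capture the exit code; it was 7 in this run because all hosts in the profile are stopped.
	out/minikube-linux-amd64 -p ha-657171 status --alsologtostderr -v 5
	rc=$?
	if [ "$rc" -ne 0 ]; then
	  echo "ha-657171 is not fully running (minikube status exit code: $rc)"
	fi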

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (56.66s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E0929 11:17:24.358031  132495 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/addons-721094/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-657171 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (55.79934481s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (56.66s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.71s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (41.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-657171 node add --control-plane --alsologtostderr -v 5: (40.889458559s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-657171 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (41.77s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.91s)

                                                
                                    
x
+
TestJSONOutput/start/Command (69.45s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-421871 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E0929 11:19:06.657019  132495 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/functional-992121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:19:34.362569  132495 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/functional-992121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-421871 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m9.448789629s)
--- PASS: TestJSONOutput/start/Command (69.45s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.65s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-421871 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.65s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.62s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-421871 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.62s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (7.99s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-421871 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-421871 --output=json --user=testUser: (7.986899052s)
--- PASS: TestJSONOutput/stop/Command (7.99s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.21s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-144633 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-144633 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (66.88593ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"c42c37e2-7928-472a-aa0b-a7686a90cad6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-144633] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0f641c0f-878e-4e7a-8702-c8721bd55003","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21656"}}
	{"specversion":"1.0","id":"2b0df840-22ef-4278-a52f-41f5d00601bf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b66a9139-c4de-4065-9149-6db25ced4731","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21656-128977/kubeconfig"}}
	{"specversion":"1.0","id":"eb502026-2b6f-4881-9f4a-61ca72b233bc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21656-128977/.minikube"}}
	{"specversion":"1.0","id":"a080f48f-710d-477c-9027-730e1150b240","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"278e3696-3a79-4fd1-ab1a-ebc142a9ac07","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e311ccaa-7fe3-4164-ae85-e1bb2c5310e7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-144633" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-144633
--- PASS: TestErrorJSONOutput (0.21s)
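Each line of the --output=json stream above is a self-contained CloudEvents-style JSON object, so the failure can be pulled out with ordinary line-oriented tools. A minimal sketch, assuming jq is available and the stdout shown above was saved to a hypothetical file named start.json:

	# Extract the error event's name, exit code and message from the saved JSON lines.
	jq -r 'select(.type == "io.k8s.sigs.minikube.error")
	       | "\(.data.name) (exit \(.data.exitcode)): \(.data.message)"' start.json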

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (35.06s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-548153 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-548153 --network=: (32.933334259s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-548153" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-548153
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-548153: (2.101199231s)
--- PASS: TestKicCustomNetwork/create_custom_network (35.06s)

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (24.74s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-617976 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-617976 --network=bridge: (22.782556865s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-617976" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-617976
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-617976: (1.936150671s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (24.74s)

                                                
                                    
x
+
TestKicExistingNetwork (22.33s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I0929 11:21:13.283477  132495 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0929 11:21:13.302100  132495 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0929 11:21:13.302194  132495 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0929 11:21:13.302215  132495 cli_runner.go:164] Run: docker network inspect existing-network
W0929 11:21:13.320736  132495 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0929 11:21:13.320768  132495 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I0929 11:21:13.320782  132495 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I0929 11:21:13.320950  132495 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0929 11:21:13.338552  132495 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-18629602bdc2 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:16:e6:ee:81:fe:38} reservation:<nil>}
I0929 11:21:13.339015  132495 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002dffd0}
I0929 11:21:13.339058  132495 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0929 11:21:13.339115  132495 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0929 11:21:13.396575  132495 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-597410 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-597410 --network=existing-network: (20.248143341s)
helpers_test.go:175: Cleaning up "existing-network-597410" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-597410
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-597410: (1.933710388s)
I0929 11:21:35.596206  132495 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (22.33s)
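The scenario above can be reproduced by hand: pre-create a Docker bridge network, start a profile attached to it, and confirm minikube reuses it instead of allocating a new one. A minimal sketch using the same names and subnet as this run (it assumes 192.168.58.0/24 is still free on the host):

	# Pre-create the bridge network that minikube should attach to rather than recreate.
	docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 existing-network
	out/minikube-linux-amd64 start -p existing-network-597410 --network=existing-network
	docker network ls --format '{{.Name}}'   # existing-network should appear unchanged
	# Clean up the profile and the manually created network.
	out/minikube-linux-amd64 delete -p existing-network-597410
	docker network rm existing-network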

                                                
                                    
x
+
TestKicCustomSubnet (24.75s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-253633 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-253633 --subnet=192.168.60.0/24: (22.584458002s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-253633 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-253633" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-253633
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-253633: (2.14114943s)
--- PASS: TestKicCustomSubnet (24.75s)

                                                
                                    
x
+
TestKicStaticIP (26.22s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-376773 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-376773 --static-ip=192.168.200.200: (23.967266137s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-376773 ip
E0929 11:22:24.357183  132495 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/addons-721094/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:175: Cleaning up "static-ip-376773" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-376773
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-376773: (2.120267025s)
--- PASS: TestKicStaticIP (26.22s)

                                                
                                    
x
+
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
x
+
TestMinikubeProfile (46.79s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-071434 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-071434 --driver=docker  --container-runtime=crio: (20.679898565s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-086012 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-086012 --driver=docker  --container-runtime=crio: (20.246926347s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-071434
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-086012
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-086012" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-086012
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-086012: (2.31585977s)
helpers_test.go:175: Cleaning up "first-071434" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-071434
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-071434: (2.346961023s)
--- PASS: TestMinikubeProfile (46.79s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (6.24s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-387046 --memory=3072 --mount-string /tmp/TestMountStartserial929632035/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-387046 --memory=3072 --mount-string /tmp/TestMountStartserial929632035/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.238348497s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.24s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-387046 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (6.55s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-399438 --memory=3072 --mount-string /tmp/TestMountStartserial929632035/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-399438 --memory=3072 --mount-string /tmp/TestMountStartserial929632035/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.554208001s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.55s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-399438 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.25s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (1.66s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-387046 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-387046 --alsologtostderr -v=5: (1.659840691s)
--- PASS: TestMountStart/serial/DeleteFirst (1.66s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-399438 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.19s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-399438
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-399438: (1.1913527s)
--- PASS: TestMountStart/serial/Stop (1.19s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (7.95s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-399438
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-399438: (6.946276363s)
--- PASS: TestMountStart/serial/RestartStopped (7.95s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-399438 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (93.82s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-708795 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E0929 11:24:06.657287  132495 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/functional-992121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-708795 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m33.345147048s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708795 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (93.82s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (5.53s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-708795 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-708795 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-708795 -- rollout status deployment/busybox: (4.072423193s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-708795 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-708795 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-708795 -- exec busybox-7b57f96db7-pgqsp -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-708795 -- exec busybox-7b57f96db7-sx9dd -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-708795 -- exec busybox-7b57f96db7-pgqsp -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-708795 -- exec busybox-7b57f96db7-sx9dd -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-708795 -- exec busybox-7b57f96db7-pgqsp -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-708795 -- exec busybox-7b57f96db7-sx9dd -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.53s)
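The DNS checks above follow a simple pattern: roll out a busybox deployment across both nodes, then run nslookup inside each pod for an external name, the default service, and its fully qualified form. A condensed sketch of the same probe, assuming kubectl points at the multinode-708795 cluster and reusing one pod name from this run:

	# Wait for the busybox deployment used by the test to finish rolling out.
	kubectl rollout status deployment/busybox
	# Verify cluster DNS from inside a pod: external name, short service name, and FQDN.
	kubectl exec busybox-7b57f96db7-pgqsp -- nslookup kubernetes.io
	kubectl exec busybox-7b57f96db7-pgqsp -- nslookup kubernetes.default
	kubectl exec busybox-7b57f96db7-pgqsp -- nslookup kubernetes.default.svc.cluster.local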

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.78s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-708795 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-708795 -- exec busybox-7b57f96db7-pgqsp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-708795 -- exec busybox-7b57f96db7-pgqsp -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-708795 -- exec busybox-7b57f96db7-sx9dd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-708795 -- exec busybox-7b57f96db7-sx9dd -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.78s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (23.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-708795 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-708795 -v=5 --alsologtostderr: (23.22743904s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708795 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (23.88s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-708795 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.67s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.67s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (9.67s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708795 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708795 cp testdata/cp-test.txt multinode-708795:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708795 ssh -n multinode-708795 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708795 cp multinode-708795:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1902674806/001/cp-test_multinode-708795.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708795 ssh -n multinode-708795 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708795 cp multinode-708795:/home/docker/cp-test.txt multinode-708795-m02:/home/docker/cp-test_multinode-708795_multinode-708795-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708795 ssh -n multinode-708795 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708795 ssh -n multinode-708795-m02 "sudo cat /home/docker/cp-test_multinode-708795_multinode-708795-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708795 cp multinode-708795:/home/docker/cp-test.txt multinode-708795-m03:/home/docker/cp-test_multinode-708795_multinode-708795-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708795 ssh -n multinode-708795 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708795 ssh -n multinode-708795-m03 "sudo cat /home/docker/cp-test_multinode-708795_multinode-708795-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708795 cp testdata/cp-test.txt multinode-708795-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708795 ssh -n multinode-708795-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708795 cp multinode-708795-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1902674806/001/cp-test_multinode-708795-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708795 ssh -n multinode-708795-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708795 cp multinode-708795-m02:/home/docker/cp-test.txt multinode-708795:/home/docker/cp-test_multinode-708795-m02_multinode-708795.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708795 ssh -n multinode-708795-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708795 ssh -n multinode-708795 "sudo cat /home/docker/cp-test_multinode-708795-m02_multinode-708795.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708795 cp multinode-708795-m02:/home/docker/cp-test.txt multinode-708795-m03:/home/docker/cp-test_multinode-708795-m02_multinode-708795-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708795 ssh -n multinode-708795-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708795 ssh -n multinode-708795-m03 "sudo cat /home/docker/cp-test_multinode-708795-m02_multinode-708795-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708795 cp testdata/cp-test.txt multinode-708795-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708795 ssh -n multinode-708795-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708795 cp multinode-708795-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1902674806/001/cp-test_multinode-708795-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708795 ssh -n multinode-708795-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708795 cp multinode-708795-m03:/home/docker/cp-test.txt multinode-708795:/home/docker/cp-test_multinode-708795-m03_multinode-708795.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708795 ssh -n multinode-708795-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708795 ssh -n multinode-708795 "sudo cat /home/docker/cp-test_multinode-708795-m03_multinode-708795.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708795 cp multinode-708795-m03:/home/docker/cp-test.txt multinode-708795-m02:/home/docker/cp-test_multinode-708795-m03_multinode-708795-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708795 ssh -n multinode-708795-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708795 ssh -n multinode-708795-m02 "sudo cat /home/docker/cp-test_multinode-708795-m03_multinode-708795-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.67s)

                                                
                                    
TestMultiNode/serial/StopNode (2.3s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708795 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-708795 node stop m03: (1.296969578s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708795 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-708795 status: exit status 7 (507.680633ms)

                                                
                                                
-- stdout --
	multinode-708795
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-708795-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-708795-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708795 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-708795 status --alsologtostderr: exit status 7 (499.203648ms)

                                                
                                                
-- stdout --
	multinode-708795
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-708795-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-708795-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 11:25:56.144627  281849 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:25:56.144944  281849 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:25:56.144956  281849 out.go:374] Setting ErrFile to fd 2...
	I0929 11:25:56.144963  281849 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:25:56.145155  281849 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21656-128977/.minikube/bin
	I0929 11:25:56.145350  281849 out.go:368] Setting JSON to false
	I0929 11:25:56.145387  281849 mustload.go:65] Loading cluster: multinode-708795
	I0929 11:25:56.145464  281849 notify.go:220] Checking for updates...
	I0929 11:25:56.145870  281849 config.go:182] Loaded profile config "multinode-708795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 11:25:56.145897  281849 status.go:174] checking status of multinode-708795 ...
	I0929 11:25:56.146377  281849 cli_runner.go:164] Run: docker container inspect multinode-708795 --format={{.State.Status}}
	I0929 11:25:56.166667  281849 status.go:371] multinode-708795 host status = "Running" (err=<nil>)
	I0929 11:25:56.166708  281849 host.go:66] Checking if "multinode-708795" exists ...
	I0929 11:25:56.167041  281849 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-708795
	I0929 11:25:56.186083  281849 host.go:66] Checking if "multinode-708795" exists ...
	I0929 11:25:56.186457  281849 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 11:25:56.186515  281849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-708795
	I0929 11:25:56.205191  281849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33023 SSHKeyPath:/home/jenkins/minikube-integration/21656-128977/.minikube/machines/multinode-708795/id_rsa Username:docker}
	I0929 11:25:56.302638  281849 ssh_runner.go:195] Run: systemctl --version
	I0929 11:25:56.307535  281849 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 11:25:56.320512  281849 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 11:25:56.379215  281849 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:65 SystemTime:2025-09-29 11:25:56.367093659 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 11:25:56.380014  281849 kubeconfig.go:125] found "multinode-708795" server: "https://192.168.67.2:8443"
	I0929 11:25:56.380057  281849 api_server.go:166] Checking apiserver status ...
	I0929 11:25:56.380101  281849 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 11:25:56.392168  281849 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1427/cgroup
	W0929 11:25:56.402839  281849 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1427/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0929 11:25:56.402908  281849 ssh_runner.go:195] Run: ls
	I0929 11:25:56.406769  281849 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0929 11:25:56.410897  281849 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0929 11:25:56.410919  281849 status.go:463] multinode-708795 apiserver status = Running (err=<nil>)
	I0929 11:25:56.410929  281849 status.go:176] multinode-708795 status: &{Name:multinode-708795 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 11:25:56.410945  281849 status.go:174] checking status of multinode-708795-m02 ...
	I0929 11:25:56.411164  281849 cli_runner.go:164] Run: docker container inspect multinode-708795-m02 --format={{.State.Status}}
	I0929 11:25:56.430694  281849 status.go:371] multinode-708795-m02 host status = "Running" (err=<nil>)
	I0929 11:25:56.430724  281849 host.go:66] Checking if "multinode-708795-m02" exists ...
	I0929 11:25:56.430999  281849 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-708795-m02
	I0929 11:25:56.449549  281849 host.go:66] Checking if "multinode-708795-m02" exists ...
	I0929 11:25:56.449807  281849 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 11:25:56.449863  281849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-708795-m02
	I0929 11:25:56.468270  281849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33028 SSHKeyPath:/home/jenkins/minikube-integration/21656-128977/.minikube/machines/multinode-708795-m02/id_rsa Username:docker}
	I0929 11:25:56.561044  281849 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 11:25:56.573776  281849 status.go:176] multinode-708795-m02 status: &{Name:multinode-708795-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0929 11:25:56.573840  281849 status.go:174] checking status of multinode-708795-m03 ...
	I0929 11:25:56.574082  281849 cli_runner.go:164] Run: docker container inspect multinode-708795-m03 --format={{.State.Status}}
	I0929 11:25:56.592442  281849 status.go:371] multinode-708795-m03 host status = "Stopped" (err=<nil>)
	I0929 11:25:56.592467  281849 status.go:384] host is not running, skipping remaining checks
	I0929 11:25:56.592476  281849 status.go:176] multinode-708795-m03 status: &{Name:multinode-708795-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.30s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (7.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708795 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-708795 node start m03 -v=5 --alsologtostderr: (6.554436668s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708795 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.25s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (83.67s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-708795
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-708795
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-708795: (29.548720315s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-708795 --wait=true -v=5 --alsologtostderr
E0929 11:27:24.357616  132495 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/addons-721094/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-708795 --wait=true -v=5 --alsologtostderr: (54.016729196s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-708795
--- PASS: TestMultiNode/serial/RestartKeepsNodes (83.67s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708795 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-708795 node delete m03: (4.664072504s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708795 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.27s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (28.65s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708795 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-708795 stop: (28.46783893s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708795 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-708795 status: exit status 7 (93.499424ms)

                                                
                                                
-- stdout --
	multinode-708795
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-708795-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708795 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-708795 status --alsologtostderr: exit status 7 (88.597122ms)

                                                
                                                
-- stdout --
	multinode-708795
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-708795-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 11:28:01.402589  292092 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:28:01.402942  292092 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:28:01.402954  292092 out.go:374] Setting ErrFile to fd 2...
	I0929 11:28:01.402959  292092 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:28:01.403168  292092 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21656-128977/.minikube/bin
	I0929 11:28:01.403340  292092 out.go:368] Setting JSON to false
	I0929 11:28:01.403366  292092 mustload.go:65] Loading cluster: multinode-708795
	I0929 11:28:01.403413  292092 notify.go:220] Checking for updates...
	I0929 11:28:01.403735  292092 config.go:182] Loaded profile config "multinode-708795": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 11:28:01.403756  292092 status.go:174] checking status of multinode-708795 ...
	I0929 11:28:01.404189  292092 cli_runner.go:164] Run: docker container inspect multinode-708795 --format={{.State.Status}}
	I0929 11:28:01.423170  292092 status.go:371] multinode-708795 host status = "Stopped" (err=<nil>)
	I0929 11:28:01.423196  292092 status.go:384] host is not running, skipping remaining checks
	I0929 11:28:01.423204  292092 status.go:176] multinode-708795 status: &{Name:multinode-708795 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 11:28:01.423254  292092 status.go:174] checking status of multinode-708795-m02 ...
	I0929 11:28:01.423583  292092 cli_runner.go:164] Run: docker container inspect multinode-708795-m02 --format={{.State.Status}}
	I0929 11:28:01.442588  292092 status.go:371] multinode-708795-m02 host status = "Stopped" (err=<nil>)
	I0929 11:28:01.442617  292092 status.go:384] host is not running, skipping remaining checks
	I0929 11:28:01.442636  292092 status.go:176] multinode-708795-m02 status: &{Name:multinode-708795-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (28.65s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (48.51s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-708795 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-708795 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (47.919438945s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-708795 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (48.51s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (24.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-708795
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-708795-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-708795-m02 --driver=docker  --container-runtime=crio: exit status 14 (66.061757ms)

                                                
                                                
-- stdout --
	* [multinode-708795-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21656
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21656-128977/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21656-128977/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-708795-m02' is duplicated with machine name 'multinode-708795-m02' in profile 'multinode-708795'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-708795-m03 --driver=docker  --container-runtime=crio
E0929 11:29:06.657012  132495 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/functional-992121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-708795-m03 --driver=docker  --container-runtime=crio: (21.409472906s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-708795
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-708795: exit status 80 (284.2619ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-708795 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-708795-m03 already exists in multinode-708795-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-708795-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-708795-m03: (2.3323075s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (24.14s)
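For reference, the name-collision behaviour this test exercises can be reproduced by hand with the same commands the log shows; the sketch below only restates them (binary path shortened to `minikube`; profile names taken from the run above):

	# a new profile may not reuse a machine name that already belongs to an existing multi-node profile
	minikube start -p multinode-708795-m02 --driver=docker --container-runtime=crio   # exit status 14, MK_USAGE
	# an unrelated profile name is accepted
	minikube start -p multinode-708795-m03 --driver=docker --container-runtime=crio
	# but "node add" then refuses to create a node whose generated name clashes with that profile
	minikube node add -p multinode-708795   # exit status 80, GUEST_NODE_ADD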

                                                
                                    
TestPreload (123.67s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-105253 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-105253 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (49.869754347s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-105253 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-105253 image pull gcr.io/k8s-minikube/busybox: (3.140680115s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-105253
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-105253: (5.905636028s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-105253 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E0929 11:30:27.428673  132495 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/addons-721094/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:30:29.724108  132495 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/functional-992121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-105253 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (1m2.111570565s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-105253 image list
helpers_test.go:175: Cleaning up "test-preload-105253" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-105253
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-105253: (2.410190007s)
--- PASS: TestPreload (123.67s)

                                                
                                    
TestScheduledStopUnix (98.49s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-316812 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-316812 --memory=3072 --driver=docker  --container-runtime=crio: (22.216573303s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-316812 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-316812 -n scheduled-stop-316812
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-316812 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0929 11:31:44.532943  132495 retry.go:31] will retry after 69.293µs: open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/scheduled-stop-316812/pid: no such file or directory
I0929 11:31:44.534150  132495 retry.go:31] will retry after 135.668µs: open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/scheduled-stop-316812/pid: no such file or directory
I0929 11:31:44.535316  132495 retry.go:31] will retry after 243.018µs: open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/scheduled-stop-316812/pid: no such file or directory
I0929 11:31:44.536469  132495 retry.go:31] will retry after 349.194µs: open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/scheduled-stop-316812/pid: no such file or directory
I0929 11:31:44.537909  132495 retry.go:31] will retry after 732.759µs: open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/scheduled-stop-316812/pid: no such file or directory
I0929 11:31:44.539046  132495 retry.go:31] will retry after 481.377µs: open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/scheduled-stop-316812/pid: no such file or directory
I0929 11:31:44.540190  132495 retry.go:31] will retry after 1.52291ms: open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/scheduled-stop-316812/pid: no such file or directory
I0929 11:31:44.543071  132495 retry.go:31] will retry after 2.462125ms: open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/scheduled-stop-316812/pid: no such file or directory
I0929 11:31:44.546377  132495 retry.go:31] will retry after 1.997053ms: open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/scheduled-stop-316812/pid: no such file or directory
I0929 11:31:44.548628  132495 retry.go:31] will retry after 5.639362ms: open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/scheduled-stop-316812/pid: no such file or directory
I0929 11:31:44.554930  132495 retry.go:31] will retry after 3.990051ms: open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/scheduled-stop-316812/pid: no such file or directory
I0929 11:31:44.559221  132495 retry.go:31] will retry after 5.097325ms: open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/scheduled-stop-316812/pid: no such file or directory
I0929 11:31:44.564471  132495 retry.go:31] will retry after 15.002302ms: open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/scheduled-stop-316812/pid: no such file or directory
I0929 11:31:44.579748  132495 retry.go:31] will retry after 23.909351ms: open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/scheduled-stop-316812/pid: no such file or directory
I0929 11:31:44.603994  132495 retry.go:31] will retry after 28.417606ms: open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/scheduled-stop-316812/pid: no such file or directory
I0929 11:31:44.633276  132495 retry.go:31] will retry after 37.277664ms: open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/scheduled-stop-316812/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-316812 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-316812 -n scheduled-stop-316812
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-316812
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-316812 --schedule 15s
E0929 11:32:24.363321  132495 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/addons-721094/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-316812
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-316812: exit status 7 (68.904448ms)

                                                
                                                
-- stdout --
	scheduled-stop-316812
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-316812 -n scheduled-stop-316812
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-316812 -n scheduled-stop-316812: exit status 7 (70.344295ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-316812" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-316812
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-316812: (4.825350776s)
--- PASS: TestScheduledStopUnix (98.49s)
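The scheduled-stop flow exercised above reduces to a handful of commands; this is a minimal sketch that simply restates the flags from the log (binary path shortened to `minikube` for readability):

	# arm a stop 5 minutes out and confirm the timer is set
	minikube stop -p scheduled-stop-316812 --schedule 5m
	minikube status -p scheduled-stop-316812 --format='{{.TimeToStop}}'
	# re-arm with a shorter delay, or cancel the pending stop
	minikube stop -p scheduled-stop-316812 --schedule 15s
	minikube stop -p scheduled-stop-316812 --cancel-scheduled
	# once an uncancelled schedule fires, "status" exits 7 and reports the node as Stopped
	minikube status -p scheduled-stop-316812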

                                                
                                    
TestInsufficientStorage (9.8s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-028153 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-028153 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.370474443s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"d5896e7d-d1d9-412f-ba61-99c1d1d0e0b0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-028153] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"44ac0d32-7b49-4e22-bae2-fb5f123cb07e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21656"}}
	{"specversion":"1.0","id":"5be19c92-8f92-4778-b559-810cec08ac1e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"2c805f93-8539-460f-b68c-b9eaf74e5cec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21656-128977/kubeconfig"}}
	{"specversion":"1.0","id":"8d2927c0-8eec-49e1-9086-3b52459b4e78","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21656-128977/.minikube"}}
	{"specversion":"1.0","id":"48989783-f9cc-4435-ae4c-7da860aa812c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"be51ccbb-aebc-4876-86df-500442906f39","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"cccb03a1-8327-4480-b715-d8ce71cdc6e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"76c1e235-13f0-4544-bd9e-37572cadf895","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"e7b51f6b-a245-45e9-a4aa-967bc8f23028","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"2e826951-872f-4d16-91b6-f8d59f400e74","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"c34004a8-df99-4cb8-b2d0-2b64daf51ae9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-028153\" primary control-plane node in \"insufficient-storage-028153\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"ccc11ee9-c87f-45a9-b449-c7cec0aaa0cd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"ac5b123f-cc56-4fac-ad72-867c0a2cd72a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"095b1f7c-1516-4bbb-b9fb-18c44a627f1a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-028153 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-028153 --output=json --layout=cluster: exit status 7 (279.55225ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-028153","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-028153","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0929 11:33:08.003719  314257 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-028153" does not appear in /home/jenkins/minikube-integration/21656-128977/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-028153 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-028153 --output=json --layout=cluster: exit status 7 (281.219037ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-028153","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-028153","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0929 11:33:08.286114  314362 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-028153" does not appear in /home/jenkins/minikube-integration/21656-128977/kubeconfig
	E0929 11:33:08.296805  314362 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/insufficient-storage-028153/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-028153" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-028153
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-028153: (1.866241874s)
--- PASS: TestInsufficientStorage (9.80s)
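The low-disk condition here is simulated rather than real: the JSON events show the test-only overrides MINIKUBE_TEST_STORAGE_CAPACITY=100 and MINIKUBE_TEST_AVAILABLE_STORAGE=19, which make minikube treat /var as full. A hedged sketch of the same check, assuming those variables are honored when exported outside the test harness:

	# force the storage preflight to see a full /var, so "start" aborts with exit status 26 (RSRC_DOCKER_STORAGE)
	export MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19
	minikube start -p insufficient-storage-028153 --memory=3072 --output=json --wait=true --driver=docker --container-runtime=crio
	# cluster status then reports StatusCode 507 ("InsufficientStorage") and exits 7
	minikube status -p insufficient-storage-028153 --output=json --layout=cluster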

                                                
                                    
TestRunningBinaryUpgrade (50.11s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.4054472534 start -p running-upgrade-226878 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.4054472534 start -p running-upgrade-226878 --memory=3072 --vm-driver=docker  --container-runtime=crio: (22.586985424s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-226878 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-226878 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (22.458960934s)
helpers_test.go:175: Cleaning up "running-upgrade-226878" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-226878
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-226878: (2.466456185s)
--- PASS: TestRunningBinaryUpgrade (50.11s)

                                                
                                    
TestKubernetesUpgrade (299.03s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-863316 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-863316 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (22.496694464s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-863316
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-863316: (1.854557571s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-863316 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-863316 status --format={{.Host}}: exit status 7 (79.319293ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-863316 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-863316 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m24.013682834s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-863316 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-863316 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-863316 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (72.04314ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-863316] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21656
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21656-128977/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21656-128977/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-863316
	    minikube start -p kubernetes-upgrade-863316 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8633162 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.0, by running:
	    
	    minikube start -p kubernetes-upgrade-863316 --kubernetes-version=v1.34.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-863316 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-863316 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (7.679552328s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-863316" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-863316
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-863316: (2.772183845s)
--- PASS: TestKubernetesUpgrade (299.03s)
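In CLI terms the upgrade path validated above is: create a cluster on an older release, stop it, restart it on a newer one, and confirm that a downgrade request is rejected. A minimal sketch using only the flags from the log:

	minikube start -p kubernetes-upgrade-863316 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio
	minikube stop -p kubernetes-upgrade-863316
	minikube start -p kubernetes-upgrade-863316 --memory=3072 --kubernetes-version=v1.34.0 --driver=docker --container-runtime=crio
	# requesting an older version against the upgraded cluster fails with exit status 106 (K8S_DOWNGRADE_UNSUPPORTED)
	minikube start -p kubernetes-upgrade-863316 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio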

                                                
                                    
TestMissingContainerUpgrade (106.02s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.236908378 start -p missing-upgrade-875621 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.236908378 start -p missing-upgrade-875621 --memory=3072 --driver=docker  --container-runtime=crio: (48.771728047s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-875621
E0929 11:34:06.656529  132495 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/functional-992121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-875621: (10.422621355s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-875621
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-875621 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-875621 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (40.742163598s)
helpers_test.go:175: Cleaning up "missing-upgrade-875621" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-875621
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-875621: (2.570329677s)
--- PASS: TestMissingContainerUpgrade (106.02s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-639954 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-639954 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (83.94844ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-639954] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21656
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21656-128977/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21656-128977/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
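The non-zero exit above is a flag-validation check: --no-kubernetes and --kubernetes-version cannot be combined. Restated as a sketch (same flags as the log; the config-unset step is the remedy the error message itself suggests):

	# rejected with exit status 14 (MK_USAGE)
	minikube start -p NoKubernetes-639954 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio
	# if kubernetes-version was set as a global config value, clear it first
	minikube config unset kubernetes-version
	minikube start -p NoKubernetes-639954 --no-kubernetes --driver=docker --container-runtime=crio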

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (37.76s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-639954 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-639954 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (37.418095725s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-639954 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (37.76s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (24.18s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-639954 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-639954 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (21.880354327s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-639954 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-639954 status -o json: exit status 2 (304.587665ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-639954","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-639954
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-639954: (1.99345608s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (24.18s)

                                                
                                    
TestNoKubernetes/serial/Start (6.06s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-639954 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-639954 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (6.056919776s)
--- PASS: TestNoKubernetes/serial/Start (6.06s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-639954 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-639954 "sudo systemctl is-active --quiet service kubelet": exit status 1 (267.346351ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.47s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.47s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-639954
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-639954: (1.222730029s)
--- PASS: TestNoKubernetes/serial/Stop (1.22s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (8.67s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-639954 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-639954 --driver=docker  --container-runtime=crio: (8.668201227s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.67s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-639954 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-639954 "sudo systemctl is-active --quiet service kubelet": exit status 1 (268.885143ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.61s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.61s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (46.15s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.3646725160 start -p stopped-upgrade-358699 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.3646725160 start -p stopped-upgrade-358699 --memory=3072 --vm-driver=docker  --container-runtime=crio: (29.045954026s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.3646725160 -p stopped-upgrade-358699 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.3646725160 -p stopped-upgrade-358699 stop: (2.45984343s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-358699 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-358699 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (14.643587949s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (46.15s)

                                                
                                    
TestPause/serial/Start (44.69s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-747165 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-747165 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (44.690742799s)
--- PASS: TestPause/serial/Start (44.69s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.06s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-358699
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-358699: (1.059173356s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.06s)

                                                
                                    
TestNetworkPlugins/group/false (3.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-912363 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-912363 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (189.296607ms)

                                                
                                                
-- stdout --
	* [false-912363] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21656
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21656-128977/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21656-128977/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 11:35:27.736310  353729 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:35:27.736589  353729 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:35:27.736601  353729 out.go:374] Setting ErrFile to fd 2...
	I0929 11:35:27.736608  353729 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:35:27.736937  353729 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21656-128977/.minikube/bin
	I0929 11:35:27.737676  353729 out.go:368] Setting JSON to false
	I0929 11:35:27.739158  353729 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":4666,"bootTime":1759141062,"procs":398,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 11:35:27.739262  353729 start.go:140] virtualization: kvm guest
	I0929 11:35:27.741474  353729 out.go:179] * [false-912363] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 11:35:27.742857  353729 notify.go:220] Checking for updates...
	I0929 11:35:27.742923  353729 out.go:179]   - MINIKUBE_LOCATION=21656
	I0929 11:35:27.744303  353729 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 11:35:27.745819  353729 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21656-128977/kubeconfig
	I0929 11:35:27.747028  353729 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21656-128977/.minikube
	I0929 11:35:27.748232  353729 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 11:35:27.749440  353729 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 11:35:27.753285  353729 config.go:182] Loaded profile config "kubernetes-upgrade-863316": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 11:35:27.753449  353729 config.go:182] Loaded profile config "pause-747165": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 11:35:27.753559  353729 config.go:182] Loaded profile config "running-upgrade-226878": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I0929 11:35:27.753718  353729 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 11:35:27.783101  353729 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 11:35:27.783299  353729 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 11:35:27.854771  353729 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-09-29 11:35:27.843111327 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 11:35:27.854953  353729 docker.go:318] overlay module found
	I0929 11:35:27.859966  353729 out.go:179] * Using the docker driver based on user configuration
	I0929 11:35:27.861221  353729 start.go:304] selected driver: docker
	I0929 11:35:27.861247  353729 start.go:924] validating driver "docker" against <nil>
	I0929 11:35:27.861265  353729 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 11:35:27.863329  353729 out.go:203] 
	W0929 11:35:27.864841  353729 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0929 11:35:27.866337  353729 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-912363 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-912363

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-912363

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-912363

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-912363

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-912363

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-912363

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-912363

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-912363

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-912363

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-912363

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-912363"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-912363"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-912363"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-912363

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-912363"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-912363"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-912363" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-912363" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-912363" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-912363" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-912363" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-912363" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-912363" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-912363" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-912363"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-912363"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-912363"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-912363"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-912363"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-912363" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-912363" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-912363" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-912363"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-912363"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-912363"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-912363"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-912363"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21656-128977/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 29 Sep 2025 11:34:24 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-863316
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21656-128977/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 29 Sep 2025 11:35:25 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: pause-747165
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21656-128977/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 29 Sep 2025 11:35:27 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: running-upgrade-226878
contexts:
- context:
    cluster: kubernetes-upgrade-863316
    user: kubernetes-upgrade-863316
  name: kubernetes-upgrade-863316
- context:
    cluster: pause-747165
    extensions:
    - extension:
        last-update: Mon, 29 Sep 2025 11:35:25 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-747165
  name: pause-747165
- context:
    cluster: running-upgrade-226878
    extensions:
    - extension:
        last-update: Mon, 29 Sep 2025 11:35:27 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: running-upgrade-226878
  name: running-upgrade-226878
current-context: running-upgrade-226878
kind: Config
users:
- name: kubernetes-upgrade-863316
  user:
    client-certificate: /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/kubernetes-upgrade-863316/client.crt
    client-key: /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/kubernetes-upgrade-863316/client.key
- name: pause-747165
  user:
    client-certificate: /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/pause-747165/client.crt
    client-key: /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/pause-747165/client.key
- name: running-upgrade-226878
  user:
    client-certificate: /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/running-upgrade-226878/client.crt
    client-key: /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/running-upgrade-226878/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-912363

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-912363"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-912363"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-912363"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-912363"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-912363"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-912363"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-912363"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-912363"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-912363"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-912363"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-912363"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-912363"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-912363"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-912363"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-912363"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-912363"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-912363"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-912363"

                                                
                                                
----------------------- debugLogs end: false-912363 [took: 3.057904339s] --------------------------------
helpers_test.go:175: Cleaning up "false-912363" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-912363
--- PASS: TestNetworkPlugins/group/false (3.41s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (7s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-747165 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-747165 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (6.986645233s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.00s)

                                                
                                    
TestPause/serial/Pause (0.71s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-747165 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.71s)

                                                
                                    
TestPause/serial/VerifyStatus (0.37s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-747165 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-747165 --output=json --layout=cluster: exit status 2 (365.705289ms)

                                                
                                                
-- stdout --
	{"Name":"pause-747165","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-747165","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.37s)

                                                
                                    
TestPause/serial/Unpause (0.74s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-747165 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.74s)

                                                
                                    
TestPause/serial/PauseAgain (0.84s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-747165 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.84s)

                                                
                                    
TestPause/serial/DeletePaused (2.91s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-747165 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-747165 --alsologtostderr -v=5: (2.911930459s)
--- PASS: TestPause/serial/DeletePaused (2.91s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.81s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-747165
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-747165: exit status 1 (16.979023ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-747165: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.81s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (51.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-761200 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-761200 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (51.278320585s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (51.28s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (54.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-818699 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-818699 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (54.026094832s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (54.03s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (11.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-761200 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [c5e77fde-5eb5-4ccb-a9a1-7b3d96ad5b44] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [c5e77fde-5eb5-4ccb-a9a1-7b3d96ad5b44] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.00355267s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-761200 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.29s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.92s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-761200 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-761200 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.92s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (16.16s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-761200 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-761200 --alsologtostderr -v=3: (16.161015889s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (16.16s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.26s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-818699 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [3d1c6c37-488a-46ab-998b-9c15cfb749e7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [3d1c6c37-488a-46ab-998b-9c15cfb749e7] Running
E0929 11:37:24.357262  132495 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/addons-721094/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003386408s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-818699 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.26s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.98s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-818699 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-818699 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.98s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-761200 -n old-k8s-version-761200
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-761200 -n old-k8s-version-761200: exit status 7 (74.979963ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-761200 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (52.43s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-761200 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-761200 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (52.059907595s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-761200 -n old-k8s-version-761200
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (52.43s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (16.47s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-818699 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-818699 --alsologtostderr -v=3: (16.471952894s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (16.47s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-818699 -n no-preload-818699
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-818699 -n no-preload-818699: exit status 7 (73.247838ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-818699 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (45.42s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-818699 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-818699 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (45.075865273s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-818699 -n no-preload-818699
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (45.42s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-xxhgh" [c78a3b59-7a46-499e-b47a-36ca31995bdf] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003264971s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-xxhgh" [c78a3b59-7a46-499e-b47a-36ca31995bdf] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003803195s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-761200 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-pl9gl" [e185a9a5-9113-4a1e-908a-e7a6bf59b055] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003657296s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-761200 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.9s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-761200 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-761200 -n old-k8s-version-761200
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-761200 -n old-k8s-version-761200: exit status 2 (329.277359ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-761200 -n old-k8s-version-761200
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-761200 -n old-k8s-version-761200: exit status 2 (316.365996ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-761200 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-761200 -n old-k8s-version-761200
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-761200 -n old-k8s-version-761200
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.90s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-pl9gl" [e185a9a5-9113-4a1e-908a-e7a6bf59b055] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003793026s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-818699 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/embed-certs/serial/FirstStart (72.07s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-386342 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-386342 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (1m12.068565812s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (72.07s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.42s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-818699 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.42s)

TestStartStop/group/no-preload/serial/Pause (3.63s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-818699 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p no-preload-818699 --alsologtostderr -v=1: (1.154752248s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-818699 -n no-preload-818699
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-818699 -n no-preload-818699: exit status 2 (397.623127ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-818699 -n no-preload-818699
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-818699 -n no-preload-818699: exit status 2 (408.224965ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-818699 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-818699 -n no-preload-818699
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-818699 -n no-preload-818699
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.63s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (42.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-087424 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-087424 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (42.187946571s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (42.19s)

TestStartStop/group/newest-cni/serial/FirstStart (30.66s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-286433 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-286433 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (30.659902366s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (30.66s)

TestNetworkPlugins/group/auto/Start (40.9s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-912363 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E0929 11:39:06.656531  132495 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/functional-992121/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-912363 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (40.903044943s)
--- PASS: TestNetworkPlugins/group/auto/Start (40.90s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.8s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-286433 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.80s)

TestStartStop/group/newest-cni/serial/Stop (7.95s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-286433 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-286433 --alsologtostderr -v=3: (7.948012561s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.95s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-286433 -n newest-cni-286433
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-286433 -n newest-cni-286433: exit status 7 (73.087894ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-286433 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/newest-cni/serial/SecondStart (11.13s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-286433 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-286433 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (10.763065406s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-286433 -n newest-cni-286433
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (11.13s)
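Taken together, the Stop, EnableAddonAfterStop and SecondStart steps above reduce to the following sequence. This is a sketch built only from the flags recorded in this run (profile newest-cni-286433, Kubernetes v1.34.0), with minikube standing in for out/minikube-linux-amd64:

# Stop the profile; status then reports Stopped and exits 7.
minikube stop -p newest-cni-286433 --alsologtostderr -v=3
minikube status --format='{{.Host}}' -p newest-cni-286433 -n newest-cni-286433
# Addons can still be enabled while the cluster is stopped.
minikube addons enable dashboard -p newest-cni-286433 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
# Restart with the same CNI and memory settings as the first start.
minikube start -p newest-cni-286433 --memory=3072 --alsologtostderr \
  --wait=apiserver,system_pods,default_sa --network-plugin=cni \
  --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
  --driver=docker --container-runtime=crio --kubernetes-version=v1.34.0
minikube status --format='{{.Host}}' -p newest-cni-286433 -n newest-cni-286433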

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-087424 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [f0c09d80-dca1-467b-8634-10211257100b] Pending
helpers_test.go:352: "busybox" [f0c09d80-dca1-467b-8634-10211257100b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [f0c09d80-dca1-467b-8634-10211257100b] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004328709s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-087424 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.27s)
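The DeployApp step above creates a busybox pod from the repository's testdata/busybox.yaml and, once it is Running, runs ulimit -n inside it. A rough equivalent with plain kubectl; the test polls with its own helpers, so kubectl wait is used here only as a stand-in, and the context name is specific to this run:

kubectl --context default-k8s-diff-port-087424 create -f testdata/busybox.yaml
# Wait for the pod labelled integration-test=busybox to become Ready (the test allows up to 8m0s).
kubectl --context default-k8s-diff-port-087424 wait --for=condition=ready pod -l integration-test=busybox --timeout=8m
# The check itself: the container's open-file limit.
kubectl --context default-k8s-diff-port-087424 exec busybox -- /bin/sh -c "ulimit -n"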

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.94s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-087424 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-087424 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.94s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-286433 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (18.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-087424 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-087424 --alsologtostderr -v=3: (18.223022551s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (18.22s)

TestStartStop/group/newest-cni/serial/Pause (2.66s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-286433 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-286433 -n newest-cni-286433
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-286433 -n newest-cni-286433: exit status 2 (313.802053ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-286433 -n newest-cni-286433
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-286433 -n newest-cni-286433: exit status 2 (325.625464ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-286433 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-286433 -n newest-cni-286433
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-286433 -n newest-cni-286433
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.66s)

TestNetworkPlugins/group/kindnet/Start (43.09s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-912363 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-912363 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (43.087755147s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (43.09s)

TestNetworkPlugins/group/auto/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-912363 "pgrep -a kubelet"
I0929 11:39:46.412161  132495 config.go:182] Loaded profile config "auto-912363": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.30s)

TestNetworkPlugins/group/auto/NetCatPod (11.22s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-912363 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-c4qgh" [2b5f116b-6973-4316-b070-f8cbabe8da18] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-c4qgh" [2b5f116b-6973-4316-b070-f8cbabe8da18] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.003742313s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.22s)

TestStartStop/group/embed-certs/serial/DeployApp (10.34s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-386342 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [a1dafcf6-0d37-4fe1-bb9d-4d8d30084224] Pending
helpers_test.go:352: "busybox" [a1dafcf6-0d37-4fe1-bb9d-4d8d30084224] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [a1dafcf6-0d37-4fe1-bb9d-4d8d30084224] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.004699266s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-386342 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.34s)

TestNetworkPlugins/group/auto/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-912363 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

TestNetworkPlugins/group/auto/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-912363 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-087424 -n default-k8s-diff-port-087424
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-087424 -n default-k8s-diff-port-087424: exit status 7 (86.282219ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-087424 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

TestNetworkPlugins/group/auto/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-912363 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)
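The DNS, Localhost and HairPin probes above all run against the netcat deployment created in the NetCatPod step; by hand they are just three kubectl exec calls (context auto-912363 from this run):

# DNS: resolve the in-cluster service name from inside the pod.
kubectl --context auto-912363 exec deployment/netcat -- nslookup kubernetes.default
# Localhost: the listener on 8080 is reachable via 127.0.0.1 inside the pod.
kubectl --context auto-912363 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
# HairPin: the pod can reach itself through the netcat service name.
kubectl --context auto-912363 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"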

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (52.36s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-087424 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-087424 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (51.991477651s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-087424 -n default-k8s-diff-port-087424
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (52.36s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.95s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-386342 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-386342 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.95s)
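EnableAddonWhileActive above enables metrics-server with an overridden image and a placeholder fake.domain registry, then describes the resulting deployment. The equivalent commands, with the embed-certs-386342 profile/context from this run and minikube standing in for out/minikube-linux-amd64:

# Enable the addon with image and registry overrides.
minikube addons enable metrics-server -p embed-certs-386342 \
  --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
  --registries=MetricsServer=fake.domain
# Inspect the deployment the addon created in kube-system.
kubectl --context embed-certs-386342 describe deploy/metrics-server -n kube-system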

TestStartStop/group/embed-certs/serial/Stop (16.39s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-386342 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-386342 --alsologtostderr -v=3: (16.388047835s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (16.39s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-386342 -n embed-certs-386342
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-386342 -n embed-certs-386342: exit status 7 (77.449201ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-386342 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/embed-certs/serial/SecondStart (52.73s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-386342 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-386342 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (52.326987339s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-386342 -n embed-certs-386342
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (52.73s)

TestNetworkPlugins/group/calico/Start (50.53s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-912363 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-912363 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (50.526419915s)
--- PASS: TestNetworkPlugins/group/calico/Start (50.53s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-wnzt2" [8e338e7a-ac7e-4e65-8c9b-c77e01d91356] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004489581s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-912363 "pgrep -a kubelet"
I0929 11:40:34.368920  132495 config.go:182] Loaded profile config "kindnet-912363": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.30s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.22s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-912363 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-hbwfx" [04ab933c-9967-47f5-b3d7-1f303417cdb5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-hbwfx" [04ab933c-9967-47f5-b3d7-1f303417cdb5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.002877928s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.22s)

TestNetworkPlugins/group/kindnet/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-912363 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.14s)

TestNetworkPlugins/group/kindnet/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-912363 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.11s)

TestNetworkPlugins/group/kindnet/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-912363 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-pvt4j" [0fe7f7fc-dca4-42cf-a8c7-a6a0b6eda705] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003635328s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-pvt4j" [0fe7f7fc-dca4-42cf-a8c7-a6a0b6eda705] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006892813s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-087424 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-087424 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-087424 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-087424 -n default-k8s-diff-port-087424
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-087424 -n default-k8s-diff-port-087424: exit status 2 (428.994032ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-087424 -n default-k8s-diff-port-087424
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-087424 -n default-k8s-diff-port-087424: exit status 2 (322.933192ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-087424 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-087424 -n default-k8s-diff-port-087424
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-087424 -n default-k8s-diff-port-087424
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.21s)

TestNetworkPlugins/group/custom-flannel/Start (56.55s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-912363 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-912363 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (56.547093439s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (56.55s)

TestNetworkPlugins/group/enable-default-cni/Start (73.6s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-912363 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-912363 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m13.603868161s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (73.60s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-dnvgq" [900f6afb-1c4c-411a-9ae4-76ab2153d51e] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003243174s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-2kjs4" [f4a978f3-039c-4231-b76a-05aca369b8f2] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003838003s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-dnvgq" [900f6afb-1c4c-411a-9ae4-76ab2153d51e] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004083493s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-386342 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-912363 "pgrep -a kubelet"
I0929 11:41:15.635404  132495 config.go:182] Loaded profile config "calico-912363": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

TestNetworkPlugins/group/calico/NetCatPod (9.22s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-912363 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-lqxnh" [15efadab-df66-437a-9a63-cc1e7c45800a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-lqxnh" [15efadab-df66-437a-9a63-cc1e7c45800a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.005645276s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.22s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-386342 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/embed-certs/serial/Pause (2.94s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-386342 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-386342 -n embed-certs-386342
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-386342 -n embed-certs-386342: exit status 2 (325.262ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-386342 -n embed-certs-386342
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-386342 -n embed-certs-386342: exit status 2 (334.135545ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-386342 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-386342 -n embed-certs-386342
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-386342 -n embed-certs-386342
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.94s)
E0929 11:42:37.182788  132495 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/no-preload-818699/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

TestNetworkPlugins/group/calico/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-912363 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.20s)

TestNetworkPlugins/group/calico/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-912363 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

TestNetworkPlugins/group/calico/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-912363 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

TestNetworkPlugins/group/flannel/Start (56.11s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-912363 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-912363 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (56.111825755s)
--- PASS: TestNetworkPlugins/group/flannel/Start (56.11s)

TestNetworkPlugins/group/bridge/Start (68.24s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-912363 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E0929 11:41:57.712995  132495 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/old-k8s-version-761200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:41:57.719489  132495 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/old-k8s-version-761200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:41:57.730941  132495 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/old-k8s-version-761200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:41:57.752406  132495 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/old-k8s-version-761200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:41:57.793930  132495 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/old-k8s-version-761200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:41:57.875490  132495 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/old-k8s-version-761200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:41:58.037203  132495 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/old-k8s-version-761200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:41:58.359254  132495 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/old-k8s-version-761200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:41:59.001166  132495 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/old-k8s-version-761200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:42:00.283005  132495 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/old-k8s-version-761200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-912363 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m8.242122463s)
--- PASS: TestNetworkPlugins/group/bridge/Start (68.24s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-912363 "pgrep -a kubelet"
I0929 11:42:02.622302  132495 config.go:182] Loaded profile config "custom-flannel-912363": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-912363 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-gcnh6" [0ae77612-4066-49f5-a9bd-87d9dfef1be1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0929 11:42:02.845268  132495 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/old-k8s-version-761200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-gcnh6" [0ae77612-4066-49f5-a9bd-87d9dfef1be1] Running
E0929 11:42:07.967082  132495 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/old-k8s-version-761200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.003982776s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.21s)

TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-912363 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-912363 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-912363 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-912363 "pgrep -a kubelet"
E0929 11:42:21.819585  132495 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/no-preload-818699/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I0929 11:42:21.958935  132495 config.go:182] Loaded profile config "enable-default-cni-912363": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)
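Note: each KubeletFlags subtest runs `pgrep -a kubelet` inside the node over `minikube ssh` and inspects the resulting command line. A rough sketch of that check follows; the profile name is copied from the log, and the crio-socket check at the end is an assumption about what a crio-based node should show, not part of the real test.

// kubelet_flags_sketch.go -- illustrative; the real check lives in net_test.go:133.
package main

import (
    "fmt"
    "os/exec"
    "strings"
)

// kubeletCmdline returns the kubelet command line as seen inside the minikube node,
// using the same `pgrep -a kubelet` the integration test runs over SSH.
func kubeletCmdline(profile string) (string, error) {
    out, err := exec.Command("minikube", "ssh", "-p", profile, "pgrep -a kubelet").CombinedOutput()
    return strings.TrimSpace(string(out)), err
}

func main() {
    cmdline, err := kubeletCmdline("enable-default-cni-912363")
    if err != nil {
        fmt.Println("ssh failed:", err)
        return
    }
    fmt.Println("kubelet flags:", cmdline)
    // Assumption: on a crio-based node the kubelet is pointed at the crio CRI socket.
    fmt.Println("mentions crio socket:", strings.Contains(cmdline, "crio"))
}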

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-912363 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-pzq5q" [e4286c0b-354f-4f54-9d48-27569d16d68f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-pzq5q" [e4286c0b-354f-4f54-9d48-27569d16d68f] Running
E0929 11:42:26.941404  132495 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/no-preload-818699/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 8.005154883s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-njhwj" [471d0f2b-1eb5-4ae4-9a03-63ef4c2f7f86] Running
E0929 11:42:24.357593  132495 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/addons-721094/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004344832s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-912363 "pgrep -a kubelet"
I0929 11:42:28.546035  132495 config.go:182] Loaded profile config "flannel-912363": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (9.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-912363 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-8psnk" [c37dfdc1-c891-4351-bba9-00c709fe36c4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-8psnk" [c37dfdc1-c891-4351-bba9-00c709fe36c4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.004556268s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-912363 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-912363 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-912363 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-912363 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-912363 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-912363 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-912363 "pgrep -a kubelet"
I0929 11:42:56.154221  132495 config.go:182] Loaded profile config "bridge-912363": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (9.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-912363 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-n84nh" [d4cf0c91-bfe1-477c-ae8b-f40d33373727] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0929 11:42:57.664607  132495 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/no-preload-818699/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-n84nh" [d4cf0c91-bfe1-477c-ae8b-f40d33373727] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.0034676s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-912363 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-912363 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-912363 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                    

Test skip (27/332)

x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.0/kubectl (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.28s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-721094 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.28s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-791408" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-791408
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-912363 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-912363

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-912363

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-912363

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-912363

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-912363

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-912363

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-912363

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-912363

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-912363

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-912363

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-912363"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-912363"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-912363"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-912363

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-912363"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-912363"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-912363" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-912363" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-912363" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-912363" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-912363" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-912363" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-912363" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-912363" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-912363"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-912363"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-912363"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-912363"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-912363"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-912363" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-912363" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-912363" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-912363"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-912363"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-912363"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-912363"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-912363"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21656-128977/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 29 Sep 2025 11:34:24 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-863316
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21656-128977/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 29 Sep 2025 11:35:25 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: pause-747165
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21656-128977/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 29 Sep 2025 11:35:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: running-upgrade-226878
contexts:
- context:
    cluster: kubernetes-upgrade-863316
    user: kubernetes-upgrade-863316
  name: kubernetes-upgrade-863316
- context:
    cluster: pause-747165
    extensions:
    - extension:
        last-update: Mon, 29 Sep 2025 11:35:25 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-747165
  name: pause-747165
- context:
    cluster: running-upgrade-226878
    user: running-upgrade-226878
  name: running-upgrade-226878
current-context: pause-747165
kind: Config
users:
- name: kubernetes-upgrade-863316
  user:
    client-certificate: /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/kubernetes-upgrade-863316/client.crt
    client-key: /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/kubernetes-upgrade-863316/client.key
- name: pause-747165
  user:
    client-certificate: /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/pause-747165/client.crt
    client-key: /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/pause-747165/client.key
- name: running-upgrade-226878
  user:
    client-certificate: /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/running-upgrade-226878/client.crt
    client-key: /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/running-upgrade-226878/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-912363

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-912363"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-912363"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-912363"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-912363"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-912363"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-912363"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-912363"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-912363"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-912363"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-912363"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-912363"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-912363"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-912363"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-912363"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-912363"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-912363"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-912363"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-912363"

                                                
                                                
----------------------- debugLogs end: kubenet-912363 [took: 3.385245331s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-912363" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-912363
--- SKIP: TestNetworkPlugins/group/kubenet (3.58s)
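Note: the repeated "context was not found for specified context: kubenet-912363" lines in the debug dump above follow directly from the kubectl config shown there: only kubernetes-upgrade-863316, pause-747165 and running-upgrade-226878 exist, because the kubenet profile was skipped before it was ever started. A small hedged sketch of checking for a context before issuing kubectl commands (hasContext is a made-up helper):

// context_check_sketch.go -- illustrative; verify a kubeconfig context exists before use.
package main

import (
    "fmt"
    "os/exec"
    "strings"
)

// hasContext asks kubectl for the list of context names and reports whether one matches.
func hasContext(name string) (bool, error) {
    out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
    if err != nil {
        return false, err
    }
    for _, c := range strings.Fields(string(out)) {
        if c == name {
            return true, nil
        }
    }
    return false, nil
}

func main() {
    ok, err := hasContext("kubenet-912363")
    fmt.Println("kubenet-912363 present:", ok, "err:", err)
}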

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.81s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
I0929 11:35:31.612182  132495 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate807595857/001:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0929 11:35:31.628944  132495 install.go:163] /tmp/TestKVMDriverInstallOrUpdate807595857/001/docker-machine-driver-kvm2 version is 1.37.0
panic.go:636: 
----------------------- debugLogs start: cilium-912363 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-912363

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-912363

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-912363

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-912363

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-912363

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-912363

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-912363

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-912363

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-912363

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-912363

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912363"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912363"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912363"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-912363

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912363"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912363"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-912363" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-912363" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-912363" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-912363" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-912363" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-912363" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-912363" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-912363" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912363"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912363"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912363"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912363"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912363"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-912363

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-912363

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-912363" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-912363" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-912363

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-912363

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-912363" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-912363" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-912363" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-912363" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-912363" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912363"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912363"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912363"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912363"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912363"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21656-128977/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 29 Sep 2025 11:34:24 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-863316
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21656-128977/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 29 Sep 2025 11:35:25 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: pause-747165
contexts:
- context:
    cluster: kubernetes-upgrade-863316
    user: kubernetes-upgrade-863316
  name: kubernetes-upgrade-863316
- context:
    cluster: pause-747165
    extensions:
    - extension:
        last-update: Mon, 29 Sep 2025 11:35:25 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-747165
  name: pause-747165
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-863316
  user:
    client-certificate: /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/kubernetes-upgrade-863316/client.crt
    client-key: /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/kubernetes-upgrade-863316/client.key
- name: pause-747165
  user:
    client-certificate: /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/pause-747165/client.crt
    client-key: /home/jenkins/minikube-integration/21656-128977/.minikube/profiles/pause-747165/client.key
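
Note: the kubeconfig above contains only the kubernetes-upgrade-863316 and pause-747165 contexts, which is why every command pinned to the cilium-912363 context in this debug dump fails before reaching any API server. A minimal sketch of reproducing that lookup behaviour, assuming a standard kubectl and minikube install and this same kubeconfig (the commands are illustrative, not part of the test run):

# List the contexts kubectl knows about; cilium-912363 is not among them.
kubectl config get-contexts

# Any command pinned to the missing context fails locally with
# 'context "cilium-912363" does not exist', as seen throughout this dump.
kubectl --context cilium-912363 get pods -A

# The profile was never created either, so minikube reports it as not found.
minikube profile list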

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-912363

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912363"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912363"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912363"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912363"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912363"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912363"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912363"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912363"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912363"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912363"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912363"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912363"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912363"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912363"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912363"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912363"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912363"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-912363" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912363"

                                                
                                                
----------------------- debugLogs end: cilium-912363 [took: 3.595032146s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-912363" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-912363
--- SKIP: TestNetworkPlugins/group/cilium (3.81s)

                                                
                                    