Test Report: Docker_Linux_crio 21655

f8e963384863fe0b9099940b8c321271fa941d51:2025-09-29:41681
Failed tests (6/332)

Order  Failed test                                   Duration (s)
37     TestAddons/parallel/Ingress                   155.42
98     TestFunctional/parallel/ServiceCmdConnect     603.12
136    TestFunctional/parallel/ServiceCmd/DeployApp  600.58
153    TestFunctional/parallel/ServiceCmd/HTTPS      0.53
154    TestFunctional/parallel/ServiceCmd/Format     0.53
155    TestFunctional/parallel/ServiceCmd/URL        0.52
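
The three ServiceCmd subtests at the bottom failed in well under a second, which suggests they bailed out on a precondition left broken by the 600s DeployApp failure above them rather than failing independently. To retry only these tests locally, the subtests can be selected with go test's -run filter, which matches one regexp element per slash-separated subtest level. A minimal sketch, assuming the minikube repository layout; the build target and any extra suite flags are assumptions, not taken from this report:

    # Sketch: re-run only the tests that failed in this report.
    # -run matches subtest names level by level, so the slash-separated
    # names from the table above pass through almost verbatim.
    make out/minikube-linux-amd64   # assumed target for the binary under test
    go test ./test/integration -v -timeout 90m \
      -run 'TestAddons/parallel/Ingress|TestFunctional/parallel/ServiceCmd'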
TestAddons/parallel/Ingress (155.42s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-164332 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-164332 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-164332 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [7a9cc890-ab89-43e4-be90-7d34bffe66b5] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [7a9cc890-ab89-43e4-be90-7d34bffe66b5] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.002824385s
I0929 11:26:12.582469  747468 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-164332 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-164332 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m12.77362235s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
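Exit status 28 is curl's CURLE_OPERATION_TIMEDOUT: the ssh hop into the node succeeded, but the request to the ingress controller never completed within the roughly two-minute window the test tolerates. The probe is easy to replay by hand against the same profile; the sketch below only recombines the two commands already shown above, with -v and an explicit --max-time added so a hang fails fast (both flags are illustrative additions, not part of the test):

    # Confirm the controller pod is Ready, then repeat the failing probe.
    kubectl --context addons-164332 wait --for=condition=ready \
      --namespace=ingress-nginx pod \
      --selector=app.kubernetes.io/component=controller --timeout=90s
    out/minikube-linux-amd64 -p addons-164332 ssh \
      "curl -sv --max-time 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"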
addons_test.go:288: (dbg) Run:  kubectl --context addons-164332 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-164332 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
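The ingress-dns leg still passes: the node address reported by minikube ip (192.168.49.2 here) doubles as the DNS server for the test record. The same check, composed into one line (a sketch; assumes the profile is still running):

    # Resolve the ingress-dns test record against the node's own address.
    nslookup hello-john.test "$(out/minikube-linux-amd64 -p addons-164332 ip)"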
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-164332
helpers_test.go:243: (dbg) docker inspect addons-164332:

-- stdout --
	[
	    {
	        "Id": "00d5fa43683ffdd5d5b7a45e01da24724cd40a6feb5097b4910669893d7eec43",
	        "Created": "2025-09-29T11:23:04.513105925Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 749496,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T11:23:04.542309008Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/00d5fa43683ffdd5d5b7a45e01da24724cd40a6feb5097b4910669893d7eec43/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/00d5fa43683ffdd5d5b7a45e01da24724cd40a6feb5097b4910669893d7eec43/hostname",
	        "HostsPath": "/var/lib/docker/containers/00d5fa43683ffdd5d5b7a45e01da24724cd40a6feb5097b4910669893d7eec43/hosts",
	        "LogPath": "/var/lib/docker/containers/00d5fa43683ffdd5d5b7a45e01da24724cd40a6feb5097b4910669893d7eec43/00d5fa43683ffdd5d5b7a45e01da24724cd40a6feb5097b4910669893d7eec43-json.log",
	        "Name": "/addons-164332",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-164332:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-164332",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "00d5fa43683ffdd5d5b7a45e01da24724cd40a6feb5097b4910669893d7eec43",
	                "LowerDir": "/var/lib/docker/overlay2/baaf98e9448878709b6fddc931d0f620cc31fb39b4504406605b152ce824b50c-init/diff:/var/lib/docker/overlay2/42045f7131296b05e4732d8df48574b1ff4b00e9dbcd57ed60e11052fef55646/diff",
	                "MergedDir": "/var/lib/docker/overlay2/baaf98e9448878709b6fddc931d0f620cc31fb39b4504406605b152ce824b50c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/baaf98e9448878709b6fddc931d0f620cc31fb39b4504406605b152ce824b50c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/baaf98e9448878709b6fddc931d0f620cc31fb39b4504406605b152ce824b50c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-164332",
	                "Source": "/var/lib/docker/volumes/addons-164332/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-164332",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-164332",
	                "name.minikube.sigs.k8s.io": "addons-164332",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1a40c48b6f670a46fde35e013af5b76cd23ef6683f8caf283cefefee81703747",
	            "SandboxKey": "/var/run/docker/netns/1a40c48b6f67",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32888"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32889"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32892"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32890"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32891"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-164332": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1e:c4:ad:85:4e:e0",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "bde75b0536e9c3cf9dcc4b066ef9580798f0bedf7bfb540fbe7440edbe5361d7",
	                    "EndpointID": "de4d22be3ac100f9f4cebd91d32649f1991c62b783f83a363f9c34a808fa875c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-164332",
	                        "00d5fa43683f"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
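
The most useful part of the inspect dump is NetworkSettings.Ports: every exposed container port is published on 127.0.0.1 under an ephemeral host port (22/tcp lands on 32888 in this run, 8443/tcp on 32891). The harness reads these mappings back with a Go template, the same one visible in the start log further down; the equivalent manual query, as a sketch:

    # Which host port does 127.0.0.1 forward to the container's sshd?
    docker container inspect addons-164332 \
      --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'
    # Prints 32888 for this run; swap in "8443/tcp" for the API server port.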
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-164332 -n addons-164332
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-164332 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-164332 logs -n 25: (1.218120136s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ --download-only -p binary-mirror-685633 --alsologtostderr --binary-mirror http://127.0.0.1:46621 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-685633 │ jenkins │ v1.37.0 │ 29 Sep 25 11:22 UTC │                     │
	│ delete  │ -p binary-mirror-685633                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-685633 │ jenkins │ v1.37.0 │ 29 Sep 25 11:22 UTC │ 29 Sep 25 11:22 UTC │
	│ addons  │ disable dashboard -p addons-164332                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-164332        │ jenkins │ v1.37.0 │ 29 Sep 25 11:22 UTC │                     │
	│ addons  │ enable dashboard -p addons-164332                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-164332        │ jenkins │ v1.37.0 │ 29 Sep 25 11:22 UTC │                     │
	│ start   │ -p addons-164332 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-164332        │ jenkins │ v1.37.0 │ 29 Sep 25 11:22 UTC │ 29 Sep 25 11:25 UTC │
	│ addons  │ addons-164332 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-164332        │ jenkins │ v1.37.0 │ 29 Sep 25 11:25 UTC │ 29 Sep 25 11:25 UTC │
	│ addons  │ addons-164332 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-164332        │ jenkins │ v1.37.0 │ 29 Sep 25 11:25 UTC │ 29 Sep 25 11:25 UTC │
	│ addons  │ enable headlamp -p addons-164332 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-164332        │ jenkins │ v1.37.0 │ 29 Sep 25 11:25 UTC │ 29 Sep 25 11:25 UTC │
	│ addons  │ addons-164332 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-164332        │ jenkins │ v1.37.0 │ 29 Sep 25 11:25 UTC │ 29 Sep 25 11:25 UTC │
	│ addons  │ addons-164332 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-164332        │ jenkins │ v1.37.0 │ 29 Sep 25 11:25 UTC │ 29 Sep 25 11:25 UTC │
	│ addons  │ addons-164332 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-164332        │ jenkins │ v1.37.0 │ 29 Sep 25 11:25 UTC │ 29 Sep 25 11:26 UTC │
	│ addons  │ addons-164332 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-164332        │ jenkins │ v1.37.0 │ 29 Sep 25 11:25 UTC │ 29 Sep 25 11:25 UTC │
	│ addons  │ addons-164332 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-164332        │ jenkins │ v1.37.0 │ 29 Sep 25 11:25 UTC │ 29 Sep 25 11:26 UTC │
	│ ip      │ addons-164332 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-164332        │ jenkins │ v1.37.0 │ 29 Sep 25 11:26 UTC │ 29 Sep 25 11:26 UTC │
	│ addons  │ addons-164332 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-164332        │ jenkins │ v1.37.0 │ 29 Sep 25 11:26 UTC │ 29 Sep 25 11:26 UTC │
	│ addons  │ addons-164332 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-164332        │ jenkins │ v1.37.0 │ 29 Sep 25 11:26 UTC │ 29 Sep 25 11:26 UTC │
	│ addons  │ addons-164332 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-164332        │ jenkins │ v1.37.0 │ 29 Sep 25 11:26 UTC │ 29 Sep 25 11:26 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-164332                                                                                                                                                                                                                                                                                                                                                                                           │ addons-164332        │ jenkins │ v1.37.0 │ 29 Sep 25 11:26 UTC │ 29 Sep 25 11:26 UTC │
	│ addons  │ addons-164332 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-164332        │ jenkins │ v1.37.0 │ 29 Sep 25 11:26 UTC │ 29 Sep 25 11:26 UTC │
	│ ssh     │ addons-164332 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-164332        │ jenkins │ v1.37.0 │ 29 Sep 25 11:26 UTC │                     │
	│ ssh     │ addons-164332 ssh cat /opt/local-path-provisioner/pvc-00b22499-75b1-465d-9e54-702d51278b65_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-164332        │ jenkins │ v1.37.0 │ 29 Sep 25 11:26 UTC │ 29 Sep 25 11:26 UTC │
	│ addons  │ addons-164332 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-164332        │ jenkins │ v1.37.0 │ 29 Sep 25 11:26 UTC │ 29 Sep 25 11:26 UTC │
	│ addons  │ addons-164332 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-164332        │ jenkins │ v1.37.0 │ 29 Sep 25 11:26 UTC │ 29 Sep 25 11:26 UTC │
	│ addons  │ addons-164332 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-164332        │ jenkins │ v1.37.0 │ 29 Sep 25 11:26 UTC │ 29 Sep 25 11:26 UTC │
	│ ip      │ addons-164332 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-164332        │ jenkins │ v1.37.0 │ 29 Sep 25 11:28 UTC │ 29 Sep 25 11:28 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 11:22:41
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 11:22:41.993550  748845 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:22:41.993784  748845 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:22:41.993792  748845 out.go:374] Setting ErrFile to fd 2...
	I0929 11:22:41.993797  748845 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:22:41.993997  748845 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21655-743952/.minikube/bin
	I0929 11:22:41.994522  748845 out.go:368] Setting JSON to false
	I0929 11:22:41.995397  748845 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":14699,"bootTime":1759130263,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 11:22:41.995498  748845 start.go:140] virtualization: kvm guest
	I0929 11:22:42.029009  748845 out.go:179] * [addons-164332] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 11:22:42.113407  748845 notify.go:220] Checking for updates...
	I0929 11:22:42.113470  748845 out.go:179]   - MINIKUBE_LOCATION=21655
	I0929 11:22:42.186137  748845 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 11:22:42.328163  748845 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21655-743952/kubeconfig
	I0929 11:22:42.447607  748845 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21655-743952/.minikube
	I0929 11:22:42.520731  748845 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 11:22:42.603534  748845 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 11:22:42.687231  748845 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 11:22:42.710122  748845 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 11:22:42.710304  748845 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 11:22:42.766739  748845 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-29 11:22:42.756086191 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 11:22:42.766861  748845 docker.go:318] overlay module found
	I0929 11:22:42.833304  748845 out.go:179] * Using the docker driver based on user configuration
	I0929 11:22:42.957541  748845 start.go:304] selected driver: docker
	I0929 11:22:42.957571  748845 start.go:924] validating driver "docker" against <nil>
	I0929 11:22:42.957586  748845 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 11:22:42.958322  748845 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 11:22:43.013304  748845 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:46 SystemTime:2025-09-29 11:22:43.002725042 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 11:22:43.013492  748845 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0929 11:22:43.013698  748845 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 11:22:43.092471  748845 out.go:179] * Using Docker driver with root privileges
	I0929 11:22:43.168270  748845 cni.go:84] Creating CNI manager for ""
	I0929 11:22:43.168355  748845 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 11:22:43.168375  748845 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0929 11:22:43.168482  748845 start.go:348] cluster config:
	{Name:addons-164332 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-164332 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}

	I0929 11:22:43.207563  748845 out.go:179] * Starting "addons-164332" primary control-plane node in "addons-164332" cluster
	I0929 11:22:43.304281  748845 cache.go:123] Beginning downloading kic base image for docker with crio
	I0929 11:22:43.375022  748845 out.go:179] * Pulling base image v0.0.48 ...
	I0929 11:22:43.459040  748845 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 11:22:43.459153  748845 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21655-743952/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0929 11:22:43.459149  748845 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 11:22:43.459177  748845 cache.go:58] Caching tarball of preloaded images
	I0929 11:22:43.459434  748845 preload.go:172] Found /home/jenkins/minikube-integration/21655-743952/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0929 11:22:43.459447  748845 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0929 11:22:43.459868  748845 profile.go:143] Saving config to /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/addons-164332/config.json ...
	I0929 11:22:43.459894  748845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/addons-164332/config.json: {Name:mk40610318371c6051d16f3b3a6fef129cacf95b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:22:43.477212  748845 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I0929 11:22:43.477362  748845 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory
	I0929 11:22:43.477385  748845 image.go:68] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory, skipping pull
	I0929 11:22:43.477395  748845 image.go:137] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in cache, skipping pull
	I0929 11:22:43.477408  748845 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 as a tarball
	I0929 11:22:43.477416  748845 cache.go:165] Loading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 from local cache
	I0929 11:22:56.281569  748845 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 from cached tarball
	I0929 11:22:56.281606  748845 cache.go:232] Successfully downloaded all kic artifacts
	I0929 11:22:56.281644  748845 start.go:360] acquireMachinesLock for addons-164332: {Name:mkfe70ba2020ec98e364c3b566508bea62626952 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 11:22:56.281747  748845 start.go:364] duration metric: took 82.757µs to acquireMachinesLock for "addons-164332"
	I0929 11:22:56.281774  748845 start.go:93] Provisioning new machine with config: &{Name:addons-164332 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-164332 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0929 11:22:56.281849  748845 start.go:125] createHost starting for "" (driver="docker")
	I0929 11:22:56.354313  748845 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0929 11:22:56.354649  748845 start.go:159] libmachine.API.Create for "addons-164332" (driver="docker")
	I0929 11:22:56.354689  748845 client.go:168] LocalClient.Create starting
	I0929 11:22:56.354853  748845 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21655-743952/.minikube/certs/ca.pem
	I0929 11:22:56.823109  748845 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21655-743952/.minikube/certs/cert.pem
	I0929 11:22:57.125165  748845 cli_runner.go:164] Run: docker network inspect addons-164332 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0929 11:22:57.141709  748845 cli_runner.go:211] docker network inspect addons-164332 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0929 11:22:57.141813  748845 network_create.go:284] running [docker network inspect addons-164332] to gather additional debugging logs...
	I0929 11:22:57.141841  748845 cli_runner.go:164] Run: docker network inspect addons-164332
	W0929 11:22:57.158022  748845 cli_runner.go:211] docker network inspect addons-164332 returned with exit code 1
	I0929 11:22:57.158060  748845 network_create.go:287] error running [docker network inspect addons-164332]: docker network inspect addons-164332: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-164332 not found
	I0929 11:22:57.158078  748845 network_create.go:289] output of [docker network inspect addons-164332]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-164332 not found
	
	** /stderr **
	I0929 11:22:57.158195  748845 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 11:22:57.175794  748845 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001a166e0}
	I0929 11:22:57.175835  748845 network_create.go:124] attempt to create docker network addons-164332 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0929 11:22:57.175885  748845 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-164332 addons-164332
	I0929 11:22:57.229334  748845 network_create.go:108] docker network addons-164332 192.168.49.0/24 created
	I0929 11:22:57.229368  748845 kic.go:121] calculated static IP "192.168.49.2" for the "addons-164332" container
	I0929 11:22:57.229428  748845 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0929 11:22:57.246072  748845 cli_runner.go:164] Run: docker volume create addons-164332 --label name.minikube.sigs.k8s.io=addons-164332 --label created_by.minikube.sigs.k8s.io=true
	I0929 11:22:57.263902  748845 oci.go:103] Successfully created a docker volume addons-164332
	I0929 11:22:57.263996  748845 cli_runner.go:164] Run: docker run --rm --name addons-164332-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-164332 --entrypoint /usr/bin/test -v addons-164332:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0929 11:23:00.352518  748845 cli_runner.go:217] Completed: docker run --rm --name addons-164332-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-164332 --entrypoint /usr/bin/test -v addons-164332:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib: (3.088469342s)
	I0929 11:23:00.352553  748845 oci.go:107] Successfully prepared a docker volume addons-164332
	I0929 11:23:00.352573  748845 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 11:23:00.352599  748845 kic.go:194] Starting extracting preloaded images to volume ...
	I0929 11:23:00.352656  748845 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21655-743952/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-164332:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0929 11:23:04.445418  748845 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21655-743952/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-164332:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.092714143s)
	I0929 11:23:04.445456  748845 kic.go:203] duration metric: took 4.092852038s to extract preloaded images to volume ...
	W0929 11:23:04.445556  748845 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0929 11:23:04.445593  748845 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0929 11:23:04.445646  748845 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0929 11:23:04.498122  748845 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-164332 --name addons-164332 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-164332 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-164332 --network addons-164332 --ip 192.168.49.2 --volume addons-164332:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0929 11:23:04.750945  748845 cli_runner.go:164] Run: docker container inspect addons-164332 --format={{.State.Running}}
	I0929 11:23:04.768314  748845 cli_runner.go:164] Run: docker container inspect addons-164332 --format={{.State.Status}}
	I0929 11:23:04.784912  748845 cli_runner.go:164] Run: docker exec addons-164332 stat /var/lib/dpkg/alternatives/iptables
	I0929 11:23:04.831409  748845 oci.go:144] the created container "addons-164332" has a running status.
	I0929 11:23:04.831444  748845 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21655-743952/.minikube/machines/addons-164332/id_rsa...
	I0929 11:23:05.083398  748845 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21655-743952/.minikube/machines/addons-164332/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0929 11:23:05.110084  748845 cli_runner.go:164] Run: docker container inspect addons-164332 --format={{.State.Status}}
	I0929 11:23:05.128274  748845 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0929 11:23:05.128294  748845 kic_runner.go:114] Args: [docker exec --privileged addons-164332 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0929 11:23:05.169833  748845 cli_runner.go:164] Run: docker container inspect addons-164332 --format={{.State.Status}}
	I0929 11:23:05.189455  748845 machine.go:93] provisionDockerMachine start ...
	I0929 11:23:05.189562  748845 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164332
	I0929 11:23:05.208196  748845 main.go:141] libmachine: Using SSH client type: native
	I0929 11:23:05.208500  748845 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I0929 11:23:05.208519  748845 main.go:141] libmachine: About to run SSH command:
	hostname
	I0929 11:23:05.342626  748845 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-164332
	
	I0929 11:23:05.342660  748845 ubuntu.go:182] provisioning hostname "addons-164332"
	I0929 11:23:05.342715  748845 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164332
	I0929 11:23:05.360974  748845 main.go:141] libmachine: Using SSH client type: native
	I0929 11:23:05.361195  748845 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I0929 11:23:05.361210  748845 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-164332 && echo "addons-164332" | sudo tee /etc/hostname
	I0929 11:23:05.506027  748845 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-164332
	
	I0929 11:23:05.506120  748845 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164332
	I0929 11:23:05.523488  748845 main.go:141] libmachine: Using SSH client type: native
	I0929 11:23:05.523730  748845 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I0929 11:23:05.523746  748845 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-164332' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-164332/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-164332' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 11:23:05.657217  748845 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 11:23:05.657257  748845 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21655-743952/.minikube CaCertPath:/home/jenkins/minikube-integration/21655-743952/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21655-743952/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21655-743952/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21655-743952/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21655-743952/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21655-743952/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21655-743952/.minikube}
	I0929 11:23:05.657313  748845 ubuntu.go:190] setting up certificates
	I0929 11:23:05.657323  748845 provision.go:84] configureAuth start
	I0929 11:23:05.657385  748845 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-164332
	I0929 11:23:05.674017  748845 provision.go:143] copyHostCerts
	I0929 11:23:05.674093  748845 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21655-743952/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21655-743952/.minikube/ca.pem (1078 bytes)
	I0929 11:23:05.674203  748845 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21655-743952/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21655-743952/.minikube/cert.pem (1123 bytes)
	I0929 11:23:05.674263  748845 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21655-743952/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21655-743952/.minikube/key.pem (1675 bytes)
	I0929 11:23:05.674315  748845 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21655-743952/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21655-743952/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21655-743952/.minikube/certs/ca-key.pem org=jenkins.addons-164332 san=[127.0.0.1 192.168.49.2 addons-164332 localhost minikube]
	I0929 11:23:05.789952  748845 provision.go:177] copyRemoteCerts
	I0929 11:23:05.790024  748845 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 11:23:05.790070  748845 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164332
	I0929 11:23:05.807197  748845 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21655-743952/.minikube/machines/addons-164332/id_rsa Username:docker}
	I0929 11:23:05.903438  748845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-743952/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0929 11:23:05.929136  748845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-743952/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0929 11:23:05.952872  748845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-743952/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0929 11:23:05.976180  748845 provision.go:87] duration metric: took 318.840259ms to configureAuth
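
The server certificate minted above is signed by the minikube CA and carries the SANs listed in the log (127.0.0.1, 192.168.49.2, addons-164332, localhost, minikube). A quick spot-check of those SANs with openssl, using the path from the log, would look like:

	# Print the Subject Alternative Name extension of the generated server cert.
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21655-743952/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'
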
	I0929 11:23:05.976210  748845 ubuntu.go:206] setting minikube options for container-runtime
	I0929 11:23:05.976423  748845 config.go:182] Loaded profile config "addons-164332": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 11:23:05.976545  748845 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164332
	I0929 11:23:05.993193  748845 main.go:141] libmachine: Using SSH client type: native
	I0929 11:23:05.993468  748845 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I0929 11:23:05.993494  748845 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0929 11:23:06.227729  748845 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
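
CRI-O only picks up the '--insecure-registry 10.96.0.0/12' flag if its systemd unit sources /etc/sysconfig/crio.minikube; the kicbase image is assumed here to wire that file in as an EnvironmentFile, which the log does not show directly. Two hedged ways to confirm from the host:

	# Check whether the crio unit references the sysconfig file (assumes kicbase wiring).
	docker exec addons-164332 systemctl show crio -p EnvironmentFiles
	# Check that the flag actually landed on the running crio process.
	docker exec addons-164332 pgrep -a crio
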
	I0929 11:23:06.227756  748845 machine.go:96] duration metric: took 1.038277073s to provisionDockerMachine
	I0929 11:23:06.227768  748845 client.go:171] duration metric: took 9.873073648s to LocalClient.Create
	I0929 11:23:06.227790  748845 start.go:167] duration metric: took 9.873144372s to libmachine.API.Create "addons-164332"
	I0929 11:23:06.227799  748845 start.go:293] postStartSetup for "addons-164332" (driver="docker")
	I0929 11:23:06.227812  748845 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 11:23:06.227876  748845 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 11:23:06.227924  748845 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164332
	I0929 11:23:06.245786  748845 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21655-743952/.minikube/machines/addons-164332/id_rsa Username:docker}
	I0929 11:23:06.344142  748845 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 11:23:06.347811  748845 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0929 11:23:06.347847  748845 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0929 11:23:06.347859  748845 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0929 11:23:06.347868  748845 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0929 11:23:06.347881  748845 filesync.go:126] Scanning /home/jenkins/minikube-integration/21655-743952/.minikube/addons for local assets ...
	I0929 11:23:06.347952  748845 filesync.go:126] Scanning /home/jenkins/minikube-integration/21655-743952/.minikube/files for local assets ...
	I0929 11:23:06.348001  748845 start.go:296] duration metric: took 120.194541ms for postStartSetup
	I0929 11:23:06.348319  748845 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-164332
	I0929 11:23:06.366690  748845 profile.go:143] Saving config to /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/addons-164332/config.json ...
	I0929 11:23:06.367031  748845 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 11:23:06.367081  748845 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164332
	I0929 11:23:06.383591  748845 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21655-743952/.minikube/machines/addons-164332/id_rsa Username:docker}
	I0929 11:23:06.475175  748845 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0929 11:23:06.479609  748845 start.go:128] duration metric: took 10.197742326s to createHost
	I0929 11:23:06.479638  748845 start.go:83] releasing machines lock for "addons-164332", held for 10.197878624s
	I0929 11:23:06.479703  748845 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-164332
	I0929 11:23:06.496937  748845 ssh_runner.go:195] Run: cat /version.json
	I0929 11:23:06.497004  748845 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164332
	I0929 11:23:06.497065  748845 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 11:23:06.497145  748845 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164332
	I0929 11:23:06.514958  748845 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21655-743952/.minikube/machines/addons-164332/id_rsa Username:docker}
	I0929 11:23:06.515344  748845 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21655-743952/.minikube/machines/addons-164332/id_rsa Username:docker}
	I0929 11:23:06.606288  748845 ssh_runner.go:195] Run: systemctl --version
	I0929 11:23:06.680982  748845 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0929 11:23:06.821533  748845 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0929 11:23:06.826370  748845 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 11:23:06.848276  748845 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0929 11:23:06.848365  748845 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 11:23:06.878282  748845 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
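
The two find/mv passes above sideline competing CNI configs by renaming them with a .mk_disabled suffix, so only the kindnet config installed later is active; the log names 87-podman-bridge.conflist and 100-crio-bridge.conf as the files disabled. Listing the directory inside the node shows the result:

	# List CNI configs inside the node; the disabled ones keep a .mk_disabled suffix.
	docker exec addons-164332 ls -l /etc/cni/net.d/
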
	I0929 11:23:06.878305  748845 start.go:495] detecting cgroup driver to use...
	I0929 11:23:06.878339  748845 detect.go:190] detected "systemd" cgroup driver on host os
	I0929 11:23:06.878395  748845 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 11:23:06.893462  748845 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 11:23:06.904407  748845 docker.go:218] disabling cri-docker service (if available) ...
	I0929 11:23:06.904479  748845 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0929 11:23:06.918417  748845 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0929 11:23:06.932762  748845 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0929 11:23:06.998939  748845 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0929 11:23:07.065052  748845 docker.go:234] disabling docker service ...
	I0929 11:23:07.065131  748845 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0929 11:23:07.083818  748845 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0929 11:23:07.095419  748845 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0929 11:23:07.159016  748845 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0929 11:23:07.301943  748845 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 11:23:07.313886  748845 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 11:23:07.330348  748845 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0929 11:23:07.330412  748845 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 11:23:07.342399  748845 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0929 11:23:07.342470  748845 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 11:23:07.352498  748845 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 11:23:07.362551  748845 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 11:23:07.372971  748845 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 11:23:07.382556  748845 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 11:23:07.392683  748845 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 11:23:07.409272  748845 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
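
Taken together, the sed edits above pin four settings in /etc/crio/crio.conf.d/02-crio.conf: the pause image, the cgroup manager, conmon's cgroup, and the unprivileged-port sysctl. A spot-check of the resulting drop-in:

	# Grep the values the sed chain above wrote into the CRI-O drop-in.
	docker exec addons-164332 grep -E \
	  'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# Expected, per the commands above:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "systemd"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",
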
	I0929 11:23:07.419218  748845 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 11:23:07.427531  748845 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 11:23:07.435913  748845 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 11:23:07.498760  748845 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0929 11:23:07.590596  748845 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0929 11:23:07.590688  748845 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0929 11:23:07.594724  748845 start.go:563] Will wait 60s for crictl version
	I0929 11:23:07.594784  748845 ssh_runner.go:195] Run: which crictl
	I0929 11:23:07.598268  748845 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 11:23:07.633595  748845 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0929 11:23:07.633685  748845 ssh_runner.go:195] Run: crio --version
	I0929 11:23:07.668667  748845 ssh_runner.go:195] Run: crio --version
	I0929 11:23:07.704226  748845 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0929 11:23:07.705250  748845 cli_runner.go:164] Run: docker network inspect addons-164332 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 11:23:07.721708  748845 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0929 11:23:07.725710  748845 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
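
The { grep -v ...; echo ...; } > /tmp/h.$$ followed by sudo cp pattern above, rather than sed -i or mv, is deliberate: inside a container /etc/hosts is a bind mount, so the file has to be overwritten in place rather than replaced by a new inode. Verifying the entry from the host:

	# Confirm the host.minikube.internal alias landed in the node's /etc/hosts.
	docker exec addons-164332 grep host.minikube.internal /etc/hosts
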
	I0929 11:23:07.737664  748845 kubeadm.go:875] updating cluster {Name:addons-164332 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-164332 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 11:23:07.737769  748845 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 11:23:07.737814  748845 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 11:23:07.805318  748845 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 11:23:07.805347  748845 crio.go:433] Images already preloaded, skipping extraction
	I0929 11:23:07.805411  748845 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 11:23:07.839262  748845 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 11:23:07.839284  748845 cache_images.go:85] Images are preloaded, skipping loading
	I0929 11:23:07.839292  748845 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 crio true true} ...
	I0929 11:23:07.839379  748845 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-164332 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:addons-164332 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
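
The [Service] stanza above clears ExecStart= once before setting minikube's own command line; that empty assignment is standard systemd practice, since a non-oneshot unit rejects a second ExecStart unless the first is reset. The drop-in is scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below, and the merged unit can be reviewed with:

	# Show kubelet.service together with all drop-ins, including 10-kubeadm.conf.
	docker exec addons-164332 systemctl cat kubelet
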
	I0929 11:23:07.839441  748845 ssh_runner.go:195] Run: crio config
	I0929 11:23:07.880599  748845 cni.go:84] Creating CNI manager for ""
	I0929 11:23:07.880626  748845 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 11:23:07.880639  748845 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 11:23:07.880660  748845 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-164332 NodeName:addons-164332 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 11:23:07.880808  748845 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-164332"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
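
The kubeadm.yaml above is scp'd to /var/tmp/minikube/kubeadm.yaml.new just below and later renamed into place. Before init runs, the file can be sanity-checked offline; recent kubeadm releases (roughly v1.26 and later) ship a validate subcommand, so a hedged check would be:

	# Validate the generated config without touching the cluster.
	docker exec addons-164332 sudo \
	  /var/lib/minikube/binaries/v1.34.0/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml
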
	I0929 11:23:07.880879  748845 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0929 11:23:07.890702  748845 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 11:23:07.890767  748845 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 11:23:07.899886  748845 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0929 11:23:07.917943  748845 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 11:23:07.937798  748845 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I0929 11:23:07.955474  748845 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0929 11:23:07.958942  748845 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 11:23:07.969756  748845 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 11:23:08.035068  748845 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 11:23:08.056045  748845 certs.go:68] Setting up /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/addons-164332 for IP: 192.168.49.2
	I0929 11:23:08.056069  748845 certs.go:194] generating shared ca certs ...
	I0929 11:23:08.056091  748845 certs.go:226] acquiring lock for ca certs: {Name:mk816d55a8e9da79e9367f831be9e08bcdb7d37f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:23:08.056222  748845 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21655-743952/.minikube/ca.key
	I0929 11:23:08.216919  748845 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21655-743952/.minikube/ca.crt ...
	I0929 11:23:08.216956  748845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21655-743952/.minikube/ca.crt: {Name:mkf3ab6d141e97a575d39e818226f21606fc8219 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:23:08.217216  748845 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21655-743952/.minikube/ca.key ...
	I0929 11:23:08.217236  748845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21655-743952/.minikube/ca.key: {Name:mka09bb83c3d5e8b362fbe1c29f4b6d70958d973 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:23:08.217364  748845 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21655-743952/.minikube/proxy-client-ca.key
	I0929 11:23:08.375914  748845 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21655-743952/.minikube/proxy-client-ca.crt ...
	I0929 11:23:08.375947  748845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21655-743952/.minikube/proxy-client-ca.crt: {Name:mkb9668cf353e0c2e264bb09512f3b7c8314b92d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:23:08.384128  748845 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21655-743952/.minikube/proxy-client-ca.key ...
	I0929 11:23:08.384155  748845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21655-743952/.minikube/proxy-client-ca.key: {Name:mk1cacdf53986532b2c4f4e6bb7a541ff1bdcb53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:23:08.384288  748845 certs.go:256] generating profile certs ...
	I0929 11:23:08.384369  748845 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/addons-164332/client.key
	I0929 11:23:08.384385  748845 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/addons-164332/client.crt with IP's: []
	I0929 11:23:08.486656  748845 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/addons-164332/client.crt ...
	I0929 11:23:08.486690  748845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/addons-164332/client.crt: {Name:mk074e16b914f202058a0a308818104e2765c509 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:23:08.486901  748845 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/addons-164332/client.key ...
	I0929 11:23:08.486918  748845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/addons-164332/client.key: {Name:mk0d0693c506d4e73de291ac001a3d70cdc3d569 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:23:08.487062  748845 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/addons-164332/apiserver.key.882a25e9
	I0929 11:23:08.487085  748845 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/addons-164332/apiserver.crt.882a25e9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0929 11:23:08.595229  748845 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/addons-164332/apiserver.crt.882a25e9 ...
	I0929 11:23:08.595261  748845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/addons-164332/apiserver.crt.882a25e9: {Name:mk1fe299db312cd0ff5c8a9772c2038a6f2a9afa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:23:08.595464  748845 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/addons-164332/apiserver.key.882a25e9 ...
	I0929 11:23:08.595484  748845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/addons-164332/apiserver.key.882a25e9: {Name:mk5fff22147b3ddb9ca426eea6bc76d876775fd2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:23:08.595608  748845 certs.go:381] copying /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/addons-164332/apiserver.crt.882a25e9 -> /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/addons-164332/apiserver.crt
	I0929 11:23:08.595719  748845 certs.go:385] copying /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/addons-164332/apiserver.key.882a25e9 -> /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/addons-164332/apiserver.key
	I0929 11:23:08.595797  748845 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/addons-164332/proxy-client.key
	I0929 11:23:08.595821  748845 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/addons-164332/proxy-client.crt with IP's: []
	I0929 11:23:08.673838  748845 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/addons-164332/proxy-client.crt ...
	I0929 11:23:08.673877  748845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/addons-164332/proxy-client.crt: {Name:mk34f745dc76de2eb418aaf47c496e26d41072ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:23:08.674737  748845 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/addons-164332/proxy-client.key ...
	I0929 11:23:08.674763  748845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/addons-164332/proxy-client.key: {Name:mk51ce4c26eb0f167c23b9d90e71cefb80609240 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:23:08.675024  748845 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-743952/.minikube/certs/ca-key.pem (1679 bytes)
	I0929 11:23:08.675067  748845 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-743952/.minikube/certs/ca.pem (1078 bytes)
	I0929 11:23:08.675104  748845 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-743952/.minikube/certs/cert.pem (1123 bytes)
	I0929 11:23:08.675132  748845 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-743952/.minikube/certs/key.pem (1675 bytes)
	I0929 11:23:08.675720  748845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-743952/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 11:23:08.701406  748845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-743952/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0929 11:23:08.726710  748845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-743952/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 11:23:08.751542  748845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-743952/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0929 11:23:08.776542  748845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/addons-164332/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0929 11:23:08.801681  748845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/addons-164332/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0929 11:23:08.825301  748845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/addons-164332/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 11:23:08.849066  748845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/addons-164332/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0929 11:23:08.872475  748845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-743952/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 11:23:08.899506  748845 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 11:23:08.917826  748845 ssh_runner.go:195] Run: openssl version
	I0929 11:23:08.923205  748845 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 11:23:08.935069  748845 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 11:23:08.938731  748845 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 11:23 /usr/share/ca-certificates/minikubeCA.pem
	I0929 11:23:08.938795  748845 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 11:23:08.945783  748845 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
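
The b5213941.0 link name is not arbitrary: OpenSSL resolves CAs in /etc/ssl/certs by subject-name hash, the value printed by the x509 -hash call above, with a .0 suffix to disambiguate collisions. Reproducing the name inside the node:

	# The symlink is named <subject hash>.0; the hash matches the command above.
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	echo "${h}.0"   # b5213941.0, per the ln -fs target in the log
	ls -l "/etc/ssl/certs/${h}.0"
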
	I0929 11:23:08.955991  748845 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 11:23:08.959577  748845 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0929 11:23:08.959639  748845 kubeadm.go:392] StartCluster: {Name:addons-164332 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-164332 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 11:23:08.959733  748845 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0929 11:23:08.959782  748845 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0929 11:23:08.995852  748845 cri.go:89] found id: ""
	I0929 11:23:08.995923  748845 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 11:23:09.005601  748845 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0929 11:23:09.014876  748845 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0929 11:23:09.014930  748845 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0929 11:23:09.024056  748845 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0929 11:23:09.024074  748845 kubeadm.go:157] found existing configuration files:
	
	I0929 11:23:09.024119  748845 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0929 11:23:09.033119  748845 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0929 11:23:09.033196  748845 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0929 11:23:09.042617  748845 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0929 11:23:09.051721  748845 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0929 11:23:09.051769  748845 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0929 11:23:09.060125  748845 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0929 11:23:09.068564  748845 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0929 11:23:09.068623  748845 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0929 11:23:09.076912  748845 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0929 11:23:09.085515  748845 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0929 11:23:09.085564  748845 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0929 11:23:09.093817  748845 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0929 11:23:09.132015  748845 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0929 11:23:09.132093  748845 kubeadm.go:310] [preflight] Running pre-flight checks
	I0929 11:23:09.146481  748845 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0929 11:23:09.146572  748845 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1040-gcp
	I0929 11:23:09.146613  748845 kubeadm.go:310] OS: Linux
	I0929 11:23:09.146666  748845 kubeadm.go:310] CGROUPS_CPU: enabled
	I0929 11:23:09.146722  748845 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0929 11:23:09.146779  748845 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0929 11:23:09.146834  748845 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0929 11:23:09.146889  748845 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0929 11:23:09.146943  748845 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0929 11:23:09.147017  748845 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0929 11:23:09.147087  748845 kubeadm.go:310] CGROUPS_IO: enabled
	I0929 11:23:09.198432  748845 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0929 11:23:09.198575  748845 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0929 11:23:09.198736  748845 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0929 11:23:09.205057  748845 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0929 11:23:09.206931  748845 out.go:252]   - Generating certificates and keys ...
	I0929 11:23:09.207020  748845 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0929 11:23:09.207129  748845 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0929 11:23:09.282873  748845 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0929 11:23:09.472515  748845 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0929 11:23:09.716842  748845 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0929 11:23:10.144364  748845 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0929 11:23:10.519769  748845 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0929 11:23:10.519912  748845 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-164332 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0929 11:23:10.800281  748845 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0929 11:23:10.800495  748845 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-164332 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0929 11:23:10.966975  748845 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0929 11:23:11.217221  748845 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0929 11:23:11.497302  748845 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0929 11:23:11.497379  748845 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0929 11:23:11.728199  748845 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0929 11:23:12.118856  748845 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0929 11:23:12.303363  748845 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0929 11:23:12.594278  748845 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0929 11:23:13.125242  748845 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0929 11:23:13.125754  748845 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0929 11:23:13.129380  748845 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0929 11:23:13.130729  748845 out.go:252]   - Booting up control plane ...
	I0929 11:23:13.130854  748845 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0929 11:23:13.130985  748845 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0929 11:23:13.131463  748845 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0929 11:23:13.140607  748845 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0929 11:23:13.140723  748845 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0929 11:23:13.146751  748845 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0929 11:23:13.147185  748845 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0929 11:23:13.147261  748845 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0929 11:23:13.223740  748845 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0929 11:23:13.223891  748845 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0929 11:23:14.225283  748845 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001755525s
	I0929 11:23:14.228454  748845 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0929 11:23:14.228571  748845 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0929 11:23:14.228742  748845 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0929 11:23:14.228879  748845 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0929 11:23:15.033350  748845 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 804.823394ms
	I0929 11:23:15.634951  748845 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 1.406515672s
	I0929 11:23:17.230130  748845 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 3.001564142s
	I0929 11:23:17.240551  748845 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0929 11:23:17.250363  748845 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0929 11:23:17.261317  748845 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0929 11:23:17.261598  748845 kubeadm.go:310] [mark-control-plane] Marking the node addons-164332 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0929 11:23:17.269333  748845 kubeadm.go:310] [bootstrap-token] Using token: w6ccz9.fe04yr19bnro6ecs
	I0929 11:23:17.270882  748845 out.go:252]   - Configuring RBAC rules ...
	I0929 11:23:17.271047  748845 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0929 11:23:17.273623  748845 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0929 11:23:17.278363  748845 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0929 11:23:17.281604  748845 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0929 11:23:17.283912  748845 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0929 11:23:17.286244  748845 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0929 11:23:17.635526  748845 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0929 11:23:18.051562  748845 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0929 11:23:18.635025  748845 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0929 11:23:18.635914  748845 kubeadm.go:310] 
	I0929 11:23:18.636015  748845 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0929 11:23:18.636066  748845 kubeadm.go:310] 
	I0929 11:23:18.636148  748845 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0929 11:23:18.636155  748845 kubeadm.go:310] 
	I0929 11:23:18.636185  748845 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0929 11:23:18.636240  748845 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0929 11:23:18.636298  748845 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0929 11:23:18.636304  748845 kubeadm.go:310] 
	I0929 11:23:18.636374  748845 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0929 11:23:18.636386  748845 kubeadm.go:310] 
	I0929 11:23:18.636453  748845 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0929 11:23:18.636463  748845 kubeadm.go:310] 
	I0929 11:23:18.636551  748845 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0929 11:23:18.636658  748845 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0929 11:23:18.636756  748845 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0929 11:23:18.636766  748845 kubeadm.go:310] 
	I0929 11:23:18.636835  748845 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0929 11:23:18.636904  748845 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0929 11:23:18.636912  748845 kubeadm.go:310] 
	I0929 11:23:18.637016  748845 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token w6ccz9.fe04yr19bnro6ecs \
	I0929 11:23:18.637116  748845 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:dd2b8a6173bbb72aac23ddd3f242b427db5008ef8522e6c83ff88efbdff474c4 \
	I0929 11:23:18.637137  748845 kubeadm.go:310] 	--control-plane 
	I0929 11:23:18.637140  748845 kubeadm.go:310] 
	I0929 11:23:18.637210  748845 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0929 11:23:18.637216  748845 kubeadm.go:310] 
	I0929 11:23:18.637290  748845 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token w6ccz9.fe04yr19bnro6ecs \
	I0929 11:23:18.637383  748845 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:dd2b8a6173bbb72aac23ddd3f242b427db5008ef8522e6c83ff88efbdff474c4 
	I0929 11:23:18.640084  748845 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1040-gcp\n", err: exit status 1
	I0929 11:23:18.640204  748845 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
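
The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA's public key in DER form. If the init output were lost, the hash could be recomputed from the CA cert scp'd earlier (the standard kubeadm recipe, run inside the node):

	# Recompute the discovery token CA cert hash from the cluster CA.
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
	# Expected: dd2b8a6173bbb72aac23ddd3f242b427db5008ef8522e6c83ff88efbdff474c4
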
	I0929 11:23:18.640238  748845 cni.go:84] Creating CNI manager for ""
	I0929 11:23:18.640248  748845 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 11:23:18.642377  748845 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0929 11:23:18.643507  748845 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0929 11:23:18.647803  748845 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0929 11:23:18.647821  748845 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0929 11:23:18.667244  748845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0929 11:23:18.879147  748845 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0929 11:23:18.879275  748845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:23:18.879347  748845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-164332 minikube.k8s.io/updated_at=2025_09_29T11_23_18_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e087d081f23c6d1317bb12845422265d8d3490cf minikube.k8s.io/name=addons-164332 minikube.k8s.io/primary=true
	I0929 11:23:18.957030  748845 ops.go:34] apiserver oom_adj: -16
	I0929 11:23:18.957062  748845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:23:19.457737  748845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:23:19.957834  748845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:23:20.457326  748845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:23:20.957648  748845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:23:21.458180  748845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:23:21.957262  748845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:23:22.457553  748845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:23:22.957399  748845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:23:23.457301  748845 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 11:23:23.521741  748845 kubeadm.go:1105] duration metric: took 4.642530403s to wait for elevateKubeSystemPrivileges
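
The repeated "get sa default" calls are a readiness poll: once the default service account exists, the cluster-admin binding created above has taken effect. A minimal sketch of the equivalent loop, assuming the same binary and kubeconfig paths:

    until sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5    # the log shows roughly 500ms between attempts
    done
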
	I0929 11:23:23.521780  748845 kubeadm.go:394] duration metric: took 14.56215341s to StartCluster
	I0929 11:23:23.521806  748845 settings.go:142] acquiring lock: {Name:mkf27947df09a9d52dac2b9df11acd0c406e7508 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:23:23.521953  748845 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21655-743952/kubeconfig
	I0929 11:23:23.522559  748845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21655-743952/kubeconfig: {Name:mkd5c854b22e6a2457f9648ee0d6a6dde3aa0837 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 11:23:23.522811  748845 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0929 11:23:23.522852  748845 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0929 11:23:23.522919  748845 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
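
The toEnable map is the merged addon selection for this profile; every key marked true is installed by the goroutines that follow. Outside a test run the same toggles use the standard CLI, e.g.:

    minikube addons enable ingress -p addons-164332
    minikube addons disable volcano -p addons-164332
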
	I0929 11:23:23.523080  748845 config.go:182] Loaded profile config "addons-164332": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 11:23:23.523090  748845 addons.go:69] Setting gcp-auth=true in profile "addons-164332"
	I0929 11:23:23.523105  748845 addons.go:69] Setting yakd=true in profile "addons-164332"
	I0929 11:23:23.523127  748845 addons.go:238] Setting addon yakd=true in "addons-164332"
	I0929 11:23:23.523134  748845 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-164332"
	I0929 11:23:23.523138  748845 addons.go:69] Setting registry=true in profile "addons-164332"
	I0929 11:23:23.523153  748845 addons.go:238] Setting addon registry=true in "addons-164332"
	I0929 11:23:23.523147  748845 addons.go:69] Setting cloud-spanner=true in profile "addons-164332"
	I0929 11:23:23.523170  748845 host.go:66] Checking if "addons-164332" exists ...
	I0929 11:23:23.523186  748845 host.go:66] Checking if "addons-164332" exists ...
	I0929 11:23:23.523191  748845 addons.go:238] Setting addon cloud-spanner=true in "addons-164332"
	I0929 11:23:23.523193  748845 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-164332"
	I0929 11:23:23.523221  748845 host.go:66] Checking if "addons-164332" exists ...
	I0929 11:23:23.523253  748845 host.go:66] Checking if "addons-164332" exists ...
	I0929 11:23:23.523386  748845 addons.go:69] Setting default-storageclass=true in profile "addons-164332"
	I0929 11:23:23.523447  748845 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-164332"
	I0929 11:23:23.523712  748845 cli_runner.go:164] Run: docker container inspect addons-164332 --format={{.State.Status}}
	I0929 11:23:23.523740  748845 cli_runner.go:164] Run: docker container inspect addons-164332 --format={{.State.Status}}
	I0929 11:23:23.523754  748845 cli_runner.go:164] Run: docker container inspect addons-164332 --format={{.State.Status}}
	I0929 11:23:23.523780  748845 cli_runner.go:164] Run: docker container inspect addons-164332 --format={{.State.Status}}
	I0929 11:23:23.523806  748845 cli_runner.go:164] Run: docker container inspect addons-164332 --format={{.State.Status}}
	I0929 11:23:23.523979  748845 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-164332"
	I0929 11:23:23.524136  748845 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-164332"
	I0929 11:23:23.524407  748845 addons.go:69] Setting registry-creds=true in profile "addons-164332"
	I0929 11:23:23.524425  748845 addons.go:238] Setting addon registry-creds=true in "addons-164332"
	I0929 11:23:23.524450  748845 host.go:66] Checking if "addons-164332" exists ...
	I0929 11:23:23.524502  748845 cli_runner.go:164] Run: docker container inspect addons-164332 --format={{.State.Status}}
	I0929 11:23:23.524741  748845 addons.go:69] Setting storage-provisioner=true in profile "addons-164332"
	I0929 11:23:23.524780  748845 addons.go:238] Setting addon storage-provisioner=true in "addons-164332"
	I0929 11:23:23.524843  748845 host.go:66] Checking if "addons-164332" exists ...
	I0929 11:23:23.523128  748845 mustload.go:65] Loading cluster: addons-164332
	I0929 11:23:23.524861  748845 addons.go:69] Setting volcano=true in profile "addons-164332"
	I0929 11:23:23.524888  748845 addons.go:238] Setting addon volcano=true in "addons-164332"
	I0929 11:23:23.524916  748845 host.go:66] Checking if "addons-164332" exists ...
	I0929 11:23:23.525425  748845 cli_runner.go:164] Run: docker container inspect addons-164332 --format={{.State.Status}}
	I0929 11:23:23.525593  748845 out.go:179] * Verifying Kubernetes components...
	I0929 11:23:23.525793  748845 addons.go:69] Setting inspektor-gadget=true in profile "addons-164332"
	I0929 11:23:23.525815  748845 addons.go:238] Setting addon inspektor-gadget=true in "addons-164332"
	I0929 11:23:23.525843  748845 host.go:66] Checking if "addons-164332" exists ...
	I0929 11:23:23.526348  748845 addons.go:69] Setting ingress=true in profile "addons-164332"
	I0929 11:23:23.526375  748845 addons.go:238] Setting addon ingress=true in "addons-164332"
	I0929 11:23:23.526420  748845 host.go:66] Checking if "addons-164332" exists ...
	I0929 11:23:23.526782  748845 cli_runner.go:164] Run: docker container inspect addons-164332 --format={{.State.Status}}
	I0929 11:23:23.527018  748845 addons.go:69] Setting metrics-server=true in profile "addons-164332"
	I0929 11:23:23.527042  748845 addons.go:238] Setting addon metrics-server=true in "addons-164332"
	I0929 11:23:23.527071  748845 host.go:66] Checking if "addons-164332" exists ...
	I0929 11:23:23.527241  748845 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 11:23:23.527539  748845 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-164332"
	I0929 11:23:23.527562  748845 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-164332"
	I0929 11:23:23.527591  748845 host.go:66] Checking if "addons-164332" exists ...
	I0929 11:23:23.528103  748845 cli_runner.go:164] Run: docker container inspect addons-164332 --format={{.State.Status}}
	I0929 11:23:23.523129  748845 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-164332"
	I0929 11:23:23.537467  748845 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-164332"
	I0929 11:23:23.537524  748845 host.go:66] Checking if "addons-164332" exists ...
	I0929 11:23:23.538437  748845 cli_runner.go:164] Run: docker container inspect addons-164332 --format={{.State.Status}}
	I0929 11:23:23.539209  748845 addons.go:69] Setting ingress-dns=true in profile "addons-164332"
	I0929 11:23:23.539285  748845 addons.go:238] Setting addon ingress-dns=true in "addons-164332"
	I0929 11:23:23.539372  748845 host.go:66] Checking if "addons-164332" exists ...
	I0929 11:23:23.539370  748845 addons.go:69] Setting volumesnapshots=true in profile "addons-164332"
	I0929 11:23:23.539395  748845 addons.go:238] Setting addon volumesnapshots=true in "addons-164332"
	I0929 11:23:23.539431  748845 host.go:66] Checking if "addons-164332" exists ...
	I0929 11:23:23.540081  748845 cli_runner.go:164] Run: docker container inspect addons-164332 --format={{.State.Status}}
	I0929 11:23:23.541551  748845 config.go:182] Loaded profile config "addons-164332": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 11:23:23.542463  748845 cli_runner.go:164] Run: docker container inspect addons-164332 --format={{.State.Status}}
	I0929 11:23:23.542343  748845 cli_runner.go:164] Run: docker container inspect addons-164332 --format={{.State.Status}}
	I0929 11:23:23.543417  748845 cli_runner.go:164] Run: docker container inspect addons-164332 --format={{.State.Status}}
	I0929 11:23:23.539241  748845 cli_runner.go:164] Run: docker container inspect addons-164332 --format={{.State.Status}}
	I0929 11:23:23.550494  748845 cli_runner.go:164] Run: docker container inspect addons-164332 --format={{.State.Status}}
	I0929 11:23:23.552752  748845 cli_runner.go:164] Run: docker container inspect addons-164332 --format={{.State.Status}}
	I0929 11:23:23.580953  748845 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0929 11:23:23.582330  748845 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0929 11:23:23.583483  748845 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0929 11:23:23.584722  748845 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0929 11:23:23.587011  748845 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I0929 11:23:23.588038  748845 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0929 11:23:23.588059  748845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0929 11:23:23.588126  748845 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164332
	I0929 11:23:23.589400  748845 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0929 11:23:23.590501  748845 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0929 11:23:23.591541  748845 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0929 11:23:23.593328  748845 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0929 11:23:23.595515  748845 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0929 11:23:23.595537  748845 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0929 11:23:23.595603  748845 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164332
	I0929 11:23:23.600134  748845 addons.go:238] Setting addon default-storageclass=true in "addons-164332"
	I0929 11:23:23.600187  748845 host.go:66] Checking if "addons-164332" exists ...
	I0929 11:23:23.600695  748845 cli_runner.go:164] Run: docker container inspect addons-164332 --format={{.State.Status}}
	I0929 11:23:23.603361  748845 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.3
	I0929 11:23:23.604637  748845 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0929 11:23:23.604758  748845 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0929 11:23:23.605932  748845 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0929 11:23:23.605952  748845 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0929 11:23:23.606039  748845 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0929 11:23:23.606070  748845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0929 11:23:23.606163  748845 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164332
	I0929 11:23:23.606486  748845 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164332
	I0929 11:23:23.608092  748845 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-164332"
	I0929 11:23:23.610123  748845 host.go:66] Checking if "addons-164332" exists ...
	I0929 11:23:23.612270  748845 host.go:66] Checking if "addons-164332" exists ...
	I0929 11:23:23.614833  748845 out.go:179]   - Using image docker.io/registry:3.0.0
	I0929 11:23:23.616882  748845 cli_runner.go:164] Run: docker container inspect addons-164332 --format={{.State.Status}}
	I0929 11:23:23.617244  748845 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0929 11:23:23.617297  748845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0929 11:23:23.617376  748845 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164332
	W0929 11:23:23.626914  748845 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0929 11:23:23.627657  748845 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0929 11:23:23.628104  748845 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0929 11:23:23.629272  748845 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0929 11:23:23.629294  748845 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0929 11:23:23.629369  748845 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164332
	I0929 11:23:23.629723  748845 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0929 11:23:23.629745  748845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0929 11:23:23.629797  748845 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164332
	I0929 11:23:23.629987  748845 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I0929 11:23:23.631228  748845 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0929 11:23:23.631247  748845 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0929 11:23:23.631304  748845 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164332
	I0929 11:23:23.654661  748845 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I0929 11:23:23.655176  748845 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21655-743952/.minikube/machines/addons-164332/id_rsa Username:docker}
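
Each "new ssh client" reuses the host port Docker mapped to the container's 22/tcp; the inspect template above resolved it to 32888. By hand, with the container name, key path and user taken from the log:

    PORT=$(docker container inspect -f \
      '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-164332)
    ssh -p "$PORT" \
      -i /home/jenkins/minikube-integration/21655-743952/.minikube/machines/addons-164332/id_rsa \
      docker@127.0.0.1
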
	I0929 11:23:23.655872  748845 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0929 11:23:23.655894  748845 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I0929 11:23:23.656003  748845 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164332
	I0929 11:23:23.657085  748845 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 11:23:23.660944  748845 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I0929 11:23:23.662781  748845 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 11:23:23.663138  748845 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 11:23:23.663162  748845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 11:23:23.663233  748845 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164332
	I0929 11:23:23.665894  748845 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 11:23:23.670089  748845 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0929 11:23:23.670308  748845 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0929 11:23:23.670324  748845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0929 11:23:23.670386  748845 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164332
	I0929 11:23:23.670605  748845 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I0929 11:23:23.670730  748845 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0929 11:23:23.672126  748845 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0929 11:23:23.672216  748845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0929 11:23:23.672218  748845 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0929 11:23:23.672351  748845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I0929 11:23:23.672393  748845 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164332
	I0929 11:23:23.672160  748845 out.go:179]   - Using image docker.io/busybox:stable
	I0929 11:23:23.672684  748845 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164332
	I0929 11:23:23.674364  748845 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0929 11:23:23.674385  748845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0929 11:23:23.674447  748845 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164332
	I0929 11:23:23.683678  748845 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
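
The pipeline above edits the coredns ConfigMap in place: one sed expression inserts a hosts block before the "forward . /etc/resolv.conf" directive, the other inserts "log" before "errors". Reconstructed from those expressions (the surrounding Corefile lines are assumed), the server block gains:

        log
        errors
        ...
        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
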
	I0929 11:23:23.686475  748845 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21655-743952/.minikube/machines/addons-164332/id_rsa Username:docker}
	I0929 11:23:23.687178  748845 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21655-743952/.minikube/machines/addons-164332/id_rsa Username:docker}
	I0929 11:23:23.690742  748845 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21655-743952/.minikube/machines/addons-164332/id_rsa Username:docker}
	I0929 11:23:23.706142  748845 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21655-743952/.minikube/machines/addons-164332/id_rsa Username:docker}
	I0929 11:23:23.717357  748845 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 11:23:23.717383  748845 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 11:23:23.717499  748845 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164332
	I0929 11:23:23.721068  748845 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21655-743952/.minikube/machines/addons-164332/id_rsa Username:docker}
	I0929 11:23:23.730297  748845 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21655-743952/.minikube/machines/addons-164332/id_rsa Username:docker}
	I0929 11:23:23.740581  748845 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21655-743952/.minikube/machines/addons-164332/id_rsa Username:docker}
	I0929 11:23:23.741041  748845 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21655-743952/.minikube/machines/addons-164332/id_rsa Username:docker}
	I0929 11:23:23.741186  748845 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21655-743952/.minikube/machines/addons-164332/id_rsa Username:docker}
	I0929 11:23:23.745564  748845 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21655-743952/.minikube/machines/addons-164332/id_rsa Username:docker}
	I0929 11:23:23.750669  748845 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21655-743952/.minikube/machines/addons-164332/id_rsa Username:docker}
	I0929 11:23:23.751433  748845 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 11:23:23.756264  748845 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21655-743952/.minikube/machines/addons-164332/id_rsa Username:docker}
	I0929 11:23:23.759170  748845 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21655-743952/.minikube/machines/addons-164332/id_rsa Username:docker}
	I0929 11:23:23.773629  748845 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21655-743952/.minikube/machines/addons-164332/id_rsa Username:docker}
	I0929 11:23:23.872603  748845 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0929 11:23:23.872632  748845 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0929 11:23:23.873319  748845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0929 11:23:23.875800  748845 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0929 11:23:23.875829  748845 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0929 11:23:23.890801  748845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0929 11:23:23.922337  748845 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0929 11:23:23.922371  748845 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0929 11:23:23.928533  748845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 11:23:23.934719  748845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0929 11:23:23.944138  748845 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0929 11:23:23.944231  748845 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0929 11:23:23.945098  748845 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0929 11:23:23.945115  748845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0929 11:23:23.947077  748845 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0929 11:23:23.947132  748845 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0929 11:23:23.955421  748845 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:23:23.955446  748845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I0929 11:23:23.959817  748845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0929 11:23:23.962378  748845 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0929 11:23:23.962440  748845 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0929 11:23:23.969548  748845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0929 11:23:23.975524  748845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0929 11:23:23.975897  748845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0929 11:23:23.983029  748845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 11:23:23.988783  748845 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0929 11:23:23.988808  748845 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0929 11:23:23.999214  748845 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0929 11:23:23.999240  748845 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0929 11:23:24.011141  748845 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0929 11:23:24.011181  748845 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0929 11:23:24.016343  748845 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0929 11:23:24.016444  748845 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0929 11:23:24.039943  748845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:23:24.060846  748845 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0929 11:23:24.060872  748845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0929 11:23:24.062296  748845 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0929 11:23:24.062321  748845 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0929 11:23:24.065114  748845 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0929 11:23:24.065133  748845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0929 11:23:24.086787  748845 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0929 11:23:24.086817  748845 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0929 11:23:24.121410  748845 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 11:23:24.121529  748845 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0929 11:23:24.150145  748845 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0929 11:23:24.150253  748845 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0929 11:23:24.177800  748845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0929 11:23:24.190271  748845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0929 11:23:24.203982  748845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 11:23:24.214167  748845 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0929 11:23:24.214215  748845 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0929 11:23:24.221644  748845 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0929 11:23:24.224132  748845 node_ready.go:35] waiting up to 6m0s for node "addons-164332" to be "Ready" ...
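
node_ready polls the node object until its Ready condition turns True, within the 6m0s budget from start.go above. The equivalent one-liner in plain kubectl:

    kubectl --context addons-164332 wait --for=condition=Ready \
      node/addons-164332 --timeout=6m0s
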
	I0929 11:23:24.265899  748845 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0929 11:23:24.265932  748845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0929 11:23:24.321783  748845 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0929 11:23:24.321817  748845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0929 11:23:24.337001  748845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0929 11:23:24.386298  748845 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0929 11:23:24.386406  748845 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0929 11:23:24.446535  748845 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0929 11:23:24.446561  748845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0929 11:23:24.511656  748845 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0929 11:23:24.511682  748845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0929 11:23:24.584857  748845 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0929 11:23:24.584936  748845 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0929 11:23:24.648764  748845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0929 11:23:24.752850  748845 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-164332" context rescaled to 1 replicas
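
Rescaling coredns to one replica is the usual resource trim for a single-node cluster; the same operation by hand:

    kubectl --context addons-164332 -n kube-system \
      scale deployment coredns --replicas=1
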
	I0929 11:23:25.178370  748845 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.218510524s)
	I0929 11:23:25.178420  748845 addons.go:479] Verifying addon ingress=true in "addons-164332"
	I0929 11:23:25.178468  748845 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.2088737s)
	I0929 11:23:25.178516  748845 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.202960163s)
	I0929 11:23:25.178611  748845 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.202687192s)
	I0929 11:23:25.178673  748845 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.195615796s)
	I0929 11:23:25.178950  748845 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.138894178s)
	W0929 11:23:25.178996  748845 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:23:25.179024  748845 retry.go:31] will retry after 155.461279ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
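
This failure is consistent with the transfer log: ig-crd.yaml was copied over as only 14 bytes (see the scp line above), far too small to carry the apiVersion/kind a CRD manifest needs, so every retry hits the same client-side validation error. A quick diagnostic on the node:

    wc -c /etc/kubernetes/addons/ig-crd.yaml    # 14 bytes per the scp log
    cat /etc/kubernetes/addons/ig-crd.yaml      # inspect what was actually written
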
	I0929 11:23:25.179067  748845 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.001236769s)
	I0929 11:23:25.179084  748845 addons.go:479] Verifying addon registry=true in "addons-164332"
	I0929 11:23:25.179330  748845 addons.go:479] Verifying addon metrics-server=true in "addons-164332"
	I0929 11:23:25.181146  748845 out.go:179] * Verifying ingress addon...
	I0929 11:23:25.181153  748845 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-164332 service yakd-dashboard -n yakd-dashboard
	
	I0929 11:23:25.181146  748845 out.go:179] * Verifying registry addon...
	I0929 11:23:25.183172  748845 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0929 11:23:25.183190  748845 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	W0929 11:23:25.185153  748845 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
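
The "object has been modified" failure is a routine optimistic-concurrency conflict: two writers raced to update the storageclass, and the loser must re-read and retry. Re-marking local-path as default by hand uses the standard annotation:

    kubectl --context addons-164332 patch storageclass local-path -p \
      '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
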
	I0929 11:23:25.185676  748845 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0929 11:23:25.185746  748845 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0929 11:23:25.185769  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
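
kapi.go polls pods by label selector until they leave Pending; the same selectors can be watched interactively:

    kubectl --context addons-164332 -n kube-system get pods \
      -l kubernetes.io/minikube-addons=registry --watch
    kubectl --context addons-164332 -n ingress-nginx get pods \
      -l app.kubernetes.io/name=ingress-nginx --watch
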
	I0929 11:23:25.334638  748845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:23:25.688277  748845 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0929 11:23:25.688306  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:23:25.688454  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:23:25.702466  748845 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.365416218s)
	W0929 11:23:25.702524  748845 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0929 11:23:25.702549  748845 retry.go:31] will retry after 166.605379ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
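
"no matches for kind VolumeSnapshotClass" is an ordering race: the snapshot CRDs and a VolumeSnapshotClass instance were applied in one batch before the API server had registered the new kinds, which the forced re-apply below resolves once the CRDs are established. A sketch of the usual two-phase workaround:

    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait --for=condition=Established \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
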
	I0929 11:23:25.702767  748845 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.053967792s)
	I0929 11:23:25.702791  748845 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-164332"
	I0929 11:23:25.708474  748845 out.go:179] * Verifying csi-hostpath-driver addon...
	I0929 11:23:25.710391  748845 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0929 11:23:25.729759  748845 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0929 11:23:25.729795  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:25.869304  748845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W0929 11:23:25.992055  748845 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:23:25.992090  748845 retry.go:31] will retry after 454.267114ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I0929 11:23:26.186996  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:23:26.187074  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:23:26.213794  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 11:23:26.226574  748845 node_ready.go:57] node "addons-164332" has "Ready":"False" status (will retry)
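
The node is still NotReady here, typically because the CNI has not yet configured the pod network; the condition can be read directly:

    kubectl --context addons-164332 get node addons-164332 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'
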
	I0929 11:23:26.447008  748845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:23:26.686640  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:23:26.686824  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:23:26.713287  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:27.186688  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:23:27.186783  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:23:27.213605  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:27.686895  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:23:27.687119  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:23:27.714036  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:28.186506  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:23:28.186595  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:23:28.215551  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 11:23:28.227416  748845 node_ready.go:57] node "addons-164332" has "Ready":"False" status (will retry)
	I0929 11:23:28.357156  748845 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.487803644s)
	I0929 11:23:28.357242  748845 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.910194432s)
	W0929 11:23:28.357291  748845 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:23:28.357323  748845 retry.go:31] will retry after 307.597975ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I0929 11:23:28.666018  748845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:23:28.686976  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:23:28.687100  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:23:28.714145  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:29.187135  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:23:29.187447  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:23:29.214703  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 11:23:29.227845  748845 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:23:29.227884  748845 retry.go:31] will retry after 1.228466755s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I0929 11:23:29.687039  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:23:29.687200  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:23:29.714032  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:30.187365  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:23:30.187508  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:23:30.214478  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:30.457585  748845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:23:30.687715  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:23:30.687861  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:23:30.714399  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 11:23:30.727239  748845 node_ready.go:57] node "addons-164332" has "Ready":"False" status (will retry)
	W0929 11:23:31.022693  748845 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:23:31.022731  748845 retry.go:31] will retry after 1.638161819s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:23:31.187223  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:23:31.187294  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:23:31.213835  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:31.219184  748845 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0929 11:23:31.219265  748845 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164332
	I0929 11:23:31.237754  748845 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21655-743952/.minikube/machines/addons-164332/id_rsa Username:docker}
	I0929 11:23:31.345111  748845 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0929 11:23:31.363263  748845 addons.go:238] Setting addon gcp-auth=true in "addons-164332"
	I0929 11:23:31.363320  748845 host.go:66] Checking if "addons-164332" exists ...
	I0929 11:23:31.363715  748845 cli_runner.go:164] Run: docker container inspect addons-164332 --format={{.State.Status}}
	I0929 11:23:31.381014  748845 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0929 11:23:31.381062  748845 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164332
	I0929 11:23:31.398355  748845 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21655-743952/.minikube/machines/addons-164332/id_rsa Username:docker}
	I0929 11:23:31.490410  748845 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 11:23:31.491444  748845 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0929 11:23:31.492274  748845 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0929 11:23:31.492287  748845 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0929 11:23:31.511004  748845 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0929 11:23:31.511035  748845 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0929 11:23:31.528862  748845 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0929 11:23:31.528883  748845 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0929 11:23:31.546913  748845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0929 11:23:31.687073  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:23:31.687267  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:23:31.713930  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:31.842644  748845 addons.go:479] Verifying addon gcp-auth=true in "addons-164332"
	I0929 11:23:31.843785  748845 out.go:179] * Verifying gcp-auth addon...
	I0929 11:23:31.845389  748845 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0929 11:23:31.847857  748845 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0929 11:23:31.847875  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
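
The kapi.go "waiting for pod" lines above poll a label selector until a matching pod leaves Pending. A rough client-go equivalent of that loop — an assumed shape with an assumed 500ms interval and 6-minute timeout, not minikube's actual kapi.go:

package addons

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodRunning blocks until a pod matching selector in ns is Running.
func waitForPodRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return false, err // stop on API errors; a real loop may tolerate them
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
			}
			return false, nil // all matches still Pending; poll again
		})
}
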
	I0929 11:23:32.185903  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:23:32.186009  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:23:32.213153  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:32.348897  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:32.661219  748845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:23:32.685852  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:23:32.686055  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:23:32.713714  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 11:23:32.727669  748845 node_ready.go:57] node "addons-164332" has "Ready":"False" status (will retry)
	I0929 11:23:32.849735  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:33.186897  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:23:33.186932  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0929 11:23:33.195278  748845 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:23:33.195311  748845 retry.go:31] will retry after 2.22748032s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:23:33.214095  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:33.348458  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:33.686467  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:23:33.686670  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:23:33.713788  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:33.848658  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:34.186631  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:23:34.186822  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:23:34.212938  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:34.348510  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:34.686444  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:23:34.686504  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:23:34.713486  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:34.848631  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:35.186707  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:23:35.186852  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:23:35.212853  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 11:23:35.226797  748845 node_ready.go:57] node "addons-164332" has "Ready":"False" status (will retry)
	I0929 11:23:35.348462  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:35.423575  748845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:23:35.686777  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:23:35.686819  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:23:35.713378  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:35.848877  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 11:23:35.949140  748845 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:23:35.949170  748845 retry.go:31] will retry after 1.782925361s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:23:36.185901  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:23:36.186081  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:23:36.213539  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:36.348887  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:36.687325  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:23:36.687383  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:23:36.714026  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:36.848228  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:37.186862  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:23:37.186923  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:23:37.213591  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 11:23:37.226916  748845 node_ready.go:57] node "addons-164332" has "Ready":"False" status (will retry)
	I0929 11:23:37.348833  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:37.686900  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:23:37.687029  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:23:37.713780  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:37.732683  748845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:23:37.848997  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:38.185731  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:23:38.185955  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:23:38.213015  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 11:23:38.276594  748845 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:23:38.276629  748845 retry.go:31] will retry after 4.79818611s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:23:38.348194  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:38.686267  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:23:38.686401  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:23:38.713984  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:38.849216  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:39.186606  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:23:39.186732  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:23:39.213393  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 11:23:39.227071  748845 node_ready.go:57] node "addons-164332" has "Ready":"False" status (will retry)
	I0929 11:23:39.348773  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:39.686869  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:23:39.686947  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:23:39.713639  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:39.849108  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:40.186031  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:23:40.186153  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:23:40.213556  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:40.348228  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:40.686038  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:23:40.686151  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:23:40.713537  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:40.848542  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:41.186828  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:23:41.187006  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:23:41.213121  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 11:23:41.227198  748845 node_ready.go:57] node "addons-164332" has "Ready":"False" status (will retry)
	I0929 11:23:41.348899  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:41.686483  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:23:41.686749  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:23:41.712975  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:41.848748  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:42.186653  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:23:42.186775  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:23:42.213098  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:42.348415  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:42.686402  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:23:42.686607  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:23:42.714081  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:42.849181  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:43.075459  748845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:23:43.187480  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:23:43.187645  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:23:43.213716  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:43.348754  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 11:23:43.611698  748845 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:23:43.611727  748845 retry.go:31] will retry after 8.080044702s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:23:43.686282  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:23:43.686499  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:23:43.714309  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 11:23:43.727336  748845 node_ready.go:57] node "addons-164332" has "Ready":"False" status (will retry)
	I0929 11:23:43.849084  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:44.186081  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:23:44.186113  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:23:44.213103  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:44.348583  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:44.686452  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:23:44.686578  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:23:44.713611  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:44.848431  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:45.186274  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:23:45.186361  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:23:45.213249  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:45.348765  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:45.686729  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:23:45.686891  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:23:45.713397  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:45.848299  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:46.186207  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:23:46.186258  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:23:46.213898  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 11:23:46.227461  748845 node_ready.go:57] node "addons-164332" has "Ready":"False" status (will retry)
	I0929 11:23:46.349092  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:46.686724  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:23:46.686777  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:23:46.713357  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:46.848936  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:47.186403  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:23:47.186594  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:23:47.214322  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:47.348603  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:47.686725  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:23:47.686790  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:23:47.713388  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:47.848844  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:48.186822  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:23:48.186879  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:23:48.213731  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:48.349157  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:48.686123  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:23:48.686296  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:23:48.713828  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 11:23:48.727171  748845 node_ready.go:57] node "addons-164332" has "Ready":"False" status (will retry)
	I0929 11:23:48.849371  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:49.187060  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:23:49.187110  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:23:49.213723  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:49.348977  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:49.686456  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:23:49.686646  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:23:49.714153  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:49.848464  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:50.186194  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:23:50.186348  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:23:50.213463  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:50.348704  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:50.686578  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:23:50.686723  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:23:50.712910  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:50.849155  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:51.185930  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:23:51.186048  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:23:51.213265  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 11:23:51.227369  748845 node_ready.go:57] node "addons-164332" has "Ready":"False" status (will retry)
	I0929 11:23:51.347793  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:51.686922  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:23:51.687015  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:23:51.691873  748845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:23:51.713537  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:51.848886  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:52.186122  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:23:52.186155  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:23:52.212931  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 11:23:52.217929  748845 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:23:52.217958  748845 retry.go:31] will retry after 9.038362163s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:23:52.348597  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:52.685938  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:23:52.686011  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:23:52.712767  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:52.848347  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:53.186796  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:23:53.186991  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:23:53.212629  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:53.348070  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:53.685847  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:23:53.686188  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:23:53.713398  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 11:23:53.726717  748845 node_ready.go:57] node "addons-164332" has "Ready":"False" status (will retry)
	I0929 11:23:53.848506  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:54.186625  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:23:54.186741  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:23:54.213223  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:54.348224  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:54.685952  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:23:54.686123  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:23:54.713654  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:54.848577  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:55.186412  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:23:55.186491  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:23:55.213548  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:55.348173  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:55.686190  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:23:55.686227  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:23:55.713751  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 11:23:55.726975  748845 node_ready.go:57] node "addons-164332" has "Ready":"False" status (will retry)
	I0929 11:23:55.848634  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:56.186636  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:23:56.186854  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:23:56.213059  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:56.347920  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:56.685875  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:23:56.686064  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:23:56.713679  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:56.848695  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:57.186995  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:23:57.187255  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:23:57.213556  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:57.349006  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:57.686569  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:23:57.686693  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:23:57.713246  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 11:23:57.727727  748845 node_ready.go:57] node "addons-164332" has "Ready":"False" status (will retry)
	I0929 11:23:57.848501  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:58.186595  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:23:58.186762  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:23:58.213205  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:58.348232  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:58.686010  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:23:58.686126  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:23:58.713607  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:58.849132  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:59.187077  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:23:59.187144  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:23:59.213732  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:59.349026  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:23:59.685833  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:23:59.685876  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:23:59.713196  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:23:59.849019  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:00.186797  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:00.186885  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:00.213156  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 11:24:00.227342  748845 node_ready.go:57] node "addons-164332" has "Ready":"False" status (will retry)
	I0929 11:24:00.348794  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:00.686543  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:00.686709  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:00.712683  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:00.848608  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:01.186593  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:01.186759  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:01.212839  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:01.256951  748845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:24:01.348997  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:01.688116  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:01.688596  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:01.713927  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 11:24:01.792012  748845 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:24:01.792044  748845 retry.go:31] will retry after 19.284729595s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
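
Across the retries of this apply, the logged delays grow from ~1.2s to ~19.3s — roughly exponential backoff with jitter. A minimal sketch of that pattern (assumed shape only; minikube's retry.go policy is not reproduced here):

package addons

import (
	"math/rand"
	"time"
)

// retryWithBackoff retries fn, roughly doubling a jittered delay each attempt.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	var err error
	delay := base
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// Sleep somewhere in [delay, 2*delay) so concurrent retries spread out.
		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay))))
		delay *= 2
	}
	return err
}
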
	I0929 11:24:01.849033  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:02.185870  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:02.186150  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:02.213574  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:02.348668  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:02.686631  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:02.686716  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:02.712829  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 11:24:02.727019  748845 node_ready.go:57] node "addons-164332" has "Ready":"False" status (will retry)
	I0929 11:24:02.848754  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:03.186626  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:03.186768  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:03.212865  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:03.348637  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:03.686649  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:03.686762  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:03.712773  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:03.848817  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:04.186852  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:04.186901  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:04.213056  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:04.349092  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:04.686131  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:04.686348  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:04.713896  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 11:24:04.727327  748845 node_ready.go:57] node "addons-164332" has "Ready":"False" status (will retry)
	I0929 11:24:04.849120  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:05.186950  748845 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0929 11:24:05.186995  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:05.187111  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:05.216800  748845 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0929 11:24:05.216829  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:05.228050  748845 node_ready.go:49] node "addons-164332" is "Ready"
	I0929 11:24:05.228079  748845 node_ready.go:38] duration metric: took 41.003880276s for node "addons-164332" to be "Ready" ...
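
Here the node_ready.go warnings finally resolve: the node reports Ready about 41s after the wait began, once its NodeReady condition turns True. A simplified client-go check of that condition (assumed shape, not minikube's actual node_ready.go):

package addons

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// nodeIsReady reports whether the named node's NodeReady condition is True.
func nodeIsReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil // condition not reported yet
}
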
	I0929 11:24:05.228096  748845 api_server.go:52] waiting for apiserver process to appear ...
	I0929 11:24:05.228155  748845 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 11:24:05.243189  748845 api_server.go:72] duration metric: took 41.720295628s to wait for apiserver process to appear ...
	I0929 11:24:05.243216  748845 api_server.go:88] waiting for apiserver healthz status ...
	I0929 11:24:05.243240  748845 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0929 11:24:05.247791  748845 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0929 11:24:05.248894  748845 api_server.go:141] control plane version: v1.34.0
	I0929 11:24:05.248921  748845 api_server.go:131] duration metric: took 5.69802ms to wait for apiserver health ...
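The block above is minikube's apiserver health gate: once the kube-apiserver process is located via pgrep, it polls https://192.168.49.2:8443/healthz until the endpoint answers 200 with body "ok". A minimal sketch of that loop in Go (hypothetical function names; minikube's real implementation in api_server.go handles certificates and proxies differently):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver /healthz endpoint until it returns
	// HTTP 200 with body "ok", or the deadline passes. TLS verification is
	// skipped only because a local cluster serves a self-signed certificate.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
			Timeout: 2 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil // apiserver is healthy
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver %s not healthy within %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.49.2:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}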
	I0929 11:24:05.248931  748845 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 11:24:05.251919  748845 system_pods.go:59] 20 kube-system pods found
	I0929 11:24:05.251950  748845 system_pods.go:61] "amd-gpu-device-plugin-jx2jk" [1350438a-4e00-4bc2-a74a-245c5429f7f0] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0929 11:24:05.251979  748845 system_pods.go:61] "coredns-66bc5c9577-hp6gw" [d69b696c-ee30-4118-bc4a-a9289f14367e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 11:24:05.251992  748845 system_pods.go:61] "csi-hostpath-attacher-0" [33abd188-21d5-4afc-81ee-986a2c9abe04] Pending
	I0929 11:24:05.251997  748845 system_pods.go:61] "csi-hostpath-resizer-0" [927d632a-8f21-442e-b7c1-a030a7ea7050] Pending
	I0929 11:24:05.252000  748845 system_pods.go:61] "csi-hostpathplugin-l5tls" [050695f6-8dbc-465b-9cba-12e4a136d556] Pending
	I0929 11:24:05.252004  748845 system_pods.go:61] "etcd-addons-164332" [67876682-ef5d-4028-8dd1-7959385817db] Running
	I0929 11:24:05.252007  748845 system_pods.go:61] "kindnet-tl4rx" [ed627b9b-f9a1-4f39-ae18-8ca1d302a05e] Running
	I0929 11:24:05.252013  748845 system_pods.go:61] "kube-apiserver-addons-164332" [8fe4a6db-a58f-48b8-8155-74aebb9780c8] Running
	I0929 11:24:05.252016  748845 system_pods.go:61] "kube-controller-manager-addons-164332" [9eb8d4a0-ff92-450f-ace1-fa669f0dd834] Running
	I0929 11:24:05.252026  748845 system_pods.go:61] "kube-ingress-dns-minikube" [1a62559c-f780-4a20-a627-d01379b91cce] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 11:24:05.252030  748845 system_pods.go:61] "kube-proxy-s6bp8" [93ff57fd-b9f0-462a-b281-e71410257955] Running
	I0929 11:24:05.252039  748845 system_pods.go:61] "kube-scheduler-addons-164332" [cfcc711b-5539-4c29-8768-f7dab8f35d27] Running
	I0929 11:24:05.252052  748845 system_pods.go:61] "metrics-server-85b7d694d7-8br9j" [8e2ab083-272f-4c43-9dcf-cf2726a7560d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 11:24:05.252062  748845 system_pods.go:61] "nvidia-device-plugin-daemonset-z46zt" [8c5b5a65-2856-463b-aa22-640067a5e289] Pending
	I0929 11:24:05.252072  748845 system_pods.go:61] "registry-66898fdd98-cmshl" [5677ae02-8e21-4448-b64a-9eb03b4d372f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 11:24:05.252082  748845 system_pods.go:61] "registry-creds-764b6fb674-dtvb9" [144622f1-53d1-4a96-98e2-04721adc3e65] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 11:24:05.252089  748845 system_pods.go:61] "registry-proxy-lrxjf" [abfc2247-51ab-4a89-a23b-3eb3f7ebd7f6] Pending
	I0929 11:24:05.252098  748845 system_pods.go:61] "snapshot-controller-7d9fbc56b8-q69hv" [9ca07ddf-726a-4957-a1cc-d49a887e47c8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 11:24:05.252105  748845 system_pods.go:61] "snapshot-controller-7d9fbc56b8-qvvhb" [430b65a4-81f8-4bff-aae6-f20b59c50ed7] Pending
	I0929 11:24:05.252109  748845 system_pods.go:61] "storage-provisioner" [2e0135fd-ff13-4d65-a10e-53f4d5d8221b] Pending
	I0929 11:24:05.252119  748845 system_pods.go:74] duration metric: took 3.180948ms to wait for pod list to return data ...
	I0929 11:24:05.252131  748845 default_sa.go:34] waiting for default service account to be created ...
	I0929 11:24:05.254080  748845 default_sa.go:45] found service account: "default"
	I0929 11:24:05.254096  748845 default_sa.go:55] duration metric: took 1.95674ms for default service account to be created ...
	I0929 11:24:05.254104  748845 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 11:24:05.256813  748845 system_pods.go:86] 20 kube-system pods found
	I0929 11:24:05.256846  748845 system_pods.go:89] "amd-gpu-device-plugin-jx2jk" [1350438a-4e00-4bc2-a74a-245c5429f7f0] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0929 11:24:05.256859  748845 system_pods.go:89] "coredns-66bc5c9577-hp6gw" [d69b696c-ee30-4118-bc4a-a9289f14367e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 11:24:05.256869  748845 system_pods.go:89] "csi-hostpath-attacher-0" [33abd188-21d5-4afc-81ee-986a2c9abe04] Pending
	I0929 11:24:05.256873  748845 system_pods.go:89] "csi-hostpath-resizer-0" [927d632a-8f21-442e-b7c1-a030a7ea7050] Pending
	I0929 11:24:05.256877  748845 system_pods.go:89] "csi-hostpathplugin-l5tls" [050695f6-8dbc-465b-9cba-12e4a136d556] Pending
	I0929 11:24:05.256880  748845 system_pods.go:89] "etcd-addons-164332" [67876682-ef5d-4028-8dd1-7959385817db] Running
	I0929 11:24:05.256884  748845 system_pods.go:89] "kindnet-tl4rx" [ed627b9b-f9a1-4f39-ae18-8ca1d302a05e] Running
	I0929 11:24:05.256890  748845 system_pods.go:89] "kube-apiserver-addons-164332" [8fe4a6db-a58f-48b8-8155-74aebb9780c8] Running
	I0929 11:24:05.256893  748845 system_pods.go:89] "kube-controller-manager-addons-164332" [9eb8d4a0-ff92-450f-ace1-fa669f0dd834] Running
	I0929 11:24:05.256902  748845 system_pods.go:89] "kube-ingress-dns-minikube" [1a62559c-f780-4a20-a627-d01379b91cce] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 11:24:05.256912  748845 system_pods.go:89] "kube-proxy-s6bp8" [93ff57fd-b9f0-462a-b281-e71410257955] Running
	I0929 11:24:05.256918  748845 system_pods.go:89] "kube-scheduler-addons-164332" [cfcc711b-5539-4c29-8768-f7dab8f35d27] Running
	I0929 11:24:05.256928  748845 system_pods.go:89] "metrics-server-85b7d694d7-8br9j" [8e2ab083-272f-4c43-9dcf-cf2726a7560d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 11:24:05.256933  748845 system_pods.go:89] "nvidia-device-plugin-daemonset-z46zt" [8c5b5a65-2856-463b-aa22-640067a5e289] Pending
	I0929 11:24:05.256945  748845 system_pods.go:89] "registry-66898fdd98-cmshl" [5677ae02-8e21-4448-b64a-9eb03b4d372f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 11:24:05.256956  748845 system_pods.go:89] "registry-creds-764b6fb674-dtvb9" [144622f1-53d1-4a96-98e2-04721adc3e65] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 11:24:05.256986  748845 system_pods.go:89] "registry-proxy-lrxjf" [abfc2247-51ab-4a89-a23b-3eb3f7ebd7f6] Pending
	I0929 11:24:05.257001  748845 system_pods.go:89] "snapshot-controller-7d9fbc56b8-q69hv" [9ca07ddf-726a-4957-a1cc-d49a887e47c8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 11:24:05.257006  748845 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qvvhb" [430b65a4-81f8-4bff-aae6-f20b59c50ed7] Pending
	I0929 11:24:05.257015  748845 system_pods.go:89] "storage-provisioner" [2e0135fd-ff13-4d65-a10e-53f4d5d8221b] Pending
	I0929 11:24:05.257032  748845 retry.go:31] will retry after 237.784028ms: missing components: kube-dns
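Each poll of the kube-system pod list ends in a retry.go:31 line with a jittered delay ("will retry after 237.784028ms: missing components: kube-dns"), and the waits grow across attempts (237ms, then 327ms, then 379ms below). A sketch of that retry-with-backoff pattern, assuming a hypothetical check callback; this is illustrative, not minikube's actual retry package:

	package main

	import (
		"fmt"
		"log"
		"math/rand"
		"strings"
		"time"
	)

	// retryUntilRunning re-runs check until it reports no missing
	// components, sleeping a jittered, growing backoff between attempts
	// (mirroring the "will retry after ..." lines in the log).
	func retryUntilRunning(check func() []string, maxWait time.Duration) error {
		base := 200 * time.Millisecond
		deadline := time.Now().Add(maxWait)
		for time.Now().Before(deadline) {
			missing := check()
			if len(missing) == 0 {
				return nil
			}
			// Jitter the delay so concurrent waiters do not poll in lockstep.
			delay := base + time.Duration(rand.Int63n(int64(base)))
			log.Printf("will retry after %v: missing components: %s",
				delay, strings.Join(missing, ", "))
			time.Sleep(delay)
			base *= 2 // grow the interval, as the log's increasing delays suggest
		}
		return fmt.Errorf("components still missing after %v", maxWait)
	}

	func main() {
		attempts := 0
		err := retryUntilRunning(func() []string {
			attempts++
			if attempts < 3 {
				return []string{"kube-dns"} // pretend kube-dns needs two retries
			}
			return nil
		}, 10*time.Second)
		fmt.Println("done, err =", err)
	}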
	I0929 11:24:05.348612  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:05.500843  748845 system_pods.go:86] 20 kube-system pods found
	I0929 11:24:05.500888  748845 system_pods.go:89] "amd-gpu-device-plugin-jx2jk" [1350438a-4e00-4bc2-a74a-245c5429f7f0] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0929 11:24:05.500900  748845 system_pods.go:89] "coredns-66bc5c9577-hp6gw" [d69b696c-ee30-4118-bc4a-a9289f14367e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 11:24:05.500909  748845 system_pods.go:89] "csi-hostpath-attacher-0" [33abd188-21d5-4afc-81ee-986a2c9abe04] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0929 11:24:05.500917  748845 system_pods.go:89] "csi-hostpath-resizer-0" [927d632a-8f21-442e-b7c1-a030a7ea7050] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0929 11:24:05.500925  748845 system_pods.go:89] "csi-hostpathplugin-l5tls" [050695f6-8dbc-465b-9cba-12e4a136d556] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0929 11:24:05.500931  748845 system_pods.go:89] "etcd-addons-164332" [67876682-ef5d-4028-8dd1-7959385817db] Running
	I0929 11:24:05.500939  748845 system_pods.go:89] "kindnet-tl4rx" [ed627b9b-f9a1-4f39-ae18-8ca1d302a05e] Running
	I0929 11:24:05.500944  748845 system_pods.go:89] "kube-apiserver-addons-164332" [8fe4a6db-a58f-48b8-8155-74aebb9780c8] Running
	I0929 11:24:05.500949  748845 system_pods.go:89] "kube-controller-manager-addons-164332" [9eb8d4a0-ff92-450f-ace1-fa669f0dd834] Running
	I0929 11:24:05.500958  748845 system_pods.go:89] "kube-ingress-dns-minikube" [1a62559c-f780-4a20-a627-d01379b91cce] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 11:24:05.500977  748845 system_pods.go:89] "kube-proxy-s6bp8" [93ff57fd-b9f0-462a-b281-e71410257955] Running
	I0929 11:24:05.500983  748845 system_pods.go:89] "kube-scheduler-addons-164332" [cfcc711b-5539-4c29-8768-f7dab8f35d27] Running
	I0929 11:24:05.500991  748845 system_pods.go:89] "metrics-server-85b7d694d7-8br9j" [8e2ab083-272f-4c43-9dcf-cf2726a7560d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 11:24:05.501001  748845 system_pods.go:89] "nvidia-device-plugin-daemonset-z46zt" [8c5b5a65-2856-463b-aa22-640067a5e289] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 11:24:05.501010  748845 system_pods.go:89] "registry-66898fdd98-cmshl" [5677ae02-8e21-4448-b64a-9eb03b4d372f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 11:24:05.501021  748845 system_pods.go:89] "registry-creds-764b6fb674-dtvb9" [144622f1-53d1-4a96-98e2-04721adc3e65] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 11:24:05.501030  748845 system_pods.go:89] "registry-proxy-lrxjf" [abfc2247-51ab-4a89-a23b-3eb3f7ebd7f6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 11:24:05.501040  748845 system_pods.go:89] "snapshot-controller-7d9fbc56b8-q69hv" [9ca07ddf-726a-4957-a1cc-d49a887e47c8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 11:24:05.501049  748845 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qvvhb" [430b65a4-81f8-4bff-aae6-f20b59c50ed7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 11:24:05.501059  748845 system_pods.go:89] "storage-provisioner" [2e0135fd-ff13-4d65-a10e-53f4d5d8221b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 11:24:05.501080  748845 retry.go:31] will retry after 327.125719ms: missing components: kube-dns
	I0929 11:24:05.686917  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:05.686916  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:05.787347  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:05.887761  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:05.889300  748845 system_pods.go:86] 20 kube-system pods found
	I0929 11:24:05.889334  748845 system_pods.go:89] "amd-gpu-device-plugin-jx2jk" [1350438a-4e00-4bc2-a74a-245c5429f7f0] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0929 11:24:05.889344  748845 system_pods.go:89] "coredns-66bc5c9577-hp6gw" [d69b696c-ee30-4118-bc4a-a9289f14367e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 11:24:05.889354  748845 system_pods.go:89] "csi-hostpath-attacher-0" [33abd188-21d5-4afc-81ee-986a2c9abe04] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0929 11:24:05.889365  748845 system_pods.go:89] "csi-hostpath-resizer-0" [927d632a-8f21-442e-b7c1-a030a7ea7050] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0929 11:24:05.889377  748845 system_pods.go:89] "csi-hostpathplugin-l5tls" [050695f6-8dbc-465b-9cba-12e4a136d556] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0929 11:24:05.889384  748845 system_pods.go:89] "etcd-addons-164332" [67876682-ef5d-4028-8dd1-7959385817db] Running
	I0929 11:24:05.889397  748845 system_pods.go:89] "kindnet-tl4rx" [ed627b9b-f9a1-4f39-ae18-8ca1d302a05e] Running
	I0929 11:24:05.889403  748845 system_pods.go:89] "kube-apiserver-addons-164332" [8fe4a6db-a58f-48b8-8155-74aebb9780c8] Running
	I0929 11:24:05.889412  748845 system_pods.go:89] "kube-controller-manager-addons-164332" [9eb8d4a0-ff92-450f-ace1-fa669f0dd834] Running
	I0929 11:24:05.889420  748845 system_pods.go:89] "kube-ingress-dns-minikube" [1a62559c-f780-4a20-a627-d01379b91cce] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 11:24:05.889426  748845 system_pods.go:89] "kube-proxy-s6bp8" [93ff57fd-b9f0-462a-b281-e71410257955] Running
	I0929 11:24:05.889435  748845 system_pods.go:89] "kube-scheduler-addons-164332" [cfcc711b-5539-4c29-8768-f7dab8f35d27] Running
	I0929 11:24:05.889443  748845 system_pods.go:89] "metrics-server-85b7d694d7-8br9j" [8e2ab083-272f-4c43-9dcf-cf2726a7560d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 11:24:05.889455  748845 system_pods.go:89] "nvidia-device-plugin-daemonset-z46zt" [8c5b5a65-2856-463b-aa22-640067a5e289] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 11:24:05.889466  748845 system_pods.go:89] "registry-66898fdd98-cmshl" [5677ae02-8e21-4448-b64a-9eb03b4d372f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 11:24:05.889475  748845 system_pods.go:89] "registry-creds-764b6fb674-dtvb9" [144622f1-53d1-4a96-98e2-04721adc3e65] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 11:24:05.889483  748845 system_pods.go:89] "registry-proxy-lrxjf" [abfc2247-51ab-4a89-a23b-3eb3f7ebd7f6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 11:24:05.889493  748845 system_pods.go:89] "snapshot-controller-7d9fbc56b8-q69hv" [9ca07ddf-726a-4957-a1cc-d49a887e47c8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 11:24:05.889504  748845 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qvvhb" [430b65a4-81f8-4bff-aae6-f20b59c50ed7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 11:24:05.889515  748845 system_pods.go:89] "storage-provisioner" [2e0135fd-ff13-4d65-a10e-53f4d5d8221b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 11:24:05.889537  748845 retry.go:31] will retry after 379.013303ms: missing components: kube-dns
	I0929 11:24:06.187308  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:06.187333  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:06.214805  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:06.273181  748845 system_pods.go:86] 20 kube-system pods found
	I0929 11:24:06.273219  748845 system_pods.go:89] "amd-gpu-device-plugin-jx2jk" [1350438a-4e00-4bc2-a74a-245c5429f7f0] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0929 11:24:06.273225  748845 system_pods.go:89] "coredns-66bc5c9577-hp6gw" [d69b696c-ee30-4118-bc4a-a9289f14367e] Running
	I0929 11:24:06.273233  748845 system_pods.go:89] "csi-hostpath-attacher-0" [33abd188-21d5-4afc-81ee-986a2c9abe04] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0929 11:24:06.273239  748845 system_pods.go:89] "csi-hostpath-resizer-0" [927d632a-8f21-442e-b7c1-a030a7ea7050] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0929 11:24:06.273245  748845 system_pods.go:89] "csi-hostpathplugin-l5tls" [050695f6-8dbc-465b-9cba-12e4a136d556] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0929 11:24:06.273256  748845 system_pods.go:89] "etcd-addons-164332" [67876682-ef5d-4028-8dd1-7959385817db] Running
	I0929 11:24:06.273260  748845 system_pods.go:89] "kindnet-tl4rx" [ed627b9b-f9a1-4f39-ae18-8ca1d302a05e] Running
	I0929 11:24:06.273264  748845 system_pods.go:89] "kube-apiserver-addons-164332" [8fe4a6db-a58f-48b8-8155-74aebb9780c8] Running
	I0929 11:24:06.273267  748845 system_pods.go:89] "kube-controller-manager-addons-164332" [9eb8d4a0-ff92-450f-ace1-fa669f0dd834] Running
	I0929 11:24:06.273278  748845 system_pods.go:89] "kube-ingress-dns-minikube" [1a62559c-f780-4a20-a627-d01379b91cce] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 11:24:06.273283  748845 system_pods.go:89] "kube-proxy-s6bp8" [93ff57fd-b9f0-462a-b281-e71410257955] Running
	I0929 11:24:06.273286  748845 system_pods.go:89] "kube-scheduler-addons-164332" [cfcc711b-5539-4c29-8768-f7dab8f35d27] Running
	I0929 11:24:06.273291  748845 system_pods.go:89] "metrics-server-85b7d694d7-8br9j" [8e2ab083-272f-4c43-9dcf-cf2726a7560d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 11:24:06.273296  748845 system_pods.go:89] "nvidia-device-plugin-daemonset-z46zt" [8c5b5a65-2856-463b-aa22-640067a5e289] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 11:24:06.273303  748845 system_pods.go:89] "registry-66898fdd98-cmshl" [5677ae02-8e21-4448-b64a-9eb03b4d372f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 11:24:06.273310  748845 system_pods.go:89] "registry-creds-764b6fb674-dtvb9" [144622f1-53d1-4a96-98e2-04721adc3e65] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 11:24:06.273315  748845 system_pods.go:89] "registry-proxy-lrxjf" [abfc2247-51ab-4a89-a23b-3eb3f7ebd7f6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 11:24:06.273324  748845 system_pods.go:89] "snapshot-controller-7d9fbc56b8-q69hv" [9ca07ddf-726a-4957-a1cc-d49a887e47c8] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 11:24:06.273332  748845 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qvvhb" [430b65a4-81f8-4bff-aae6-f20b59c50ed7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 11:24:06.273335  748845 system_pods.go:89] "storage-provisioner" [2e0135fd-ff13-4d65-a10e-53f4d5d8221b] Running
	I0929 11:24:06.273344  748845 system_pods.go:126] duration metric: took 1.019234847s to wait for k8s-apps to be running ...
	I0929 11:24:06.273355  748845 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 11:24:06.273425  748845 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 11:24:06.287094  748845 system_svc.go:56] duration metric: took 13.723803ms WaitForService to wait for kubelet
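The kubelet gate above is a single shell probe, `sudo systemctl is-active --quiet service kubelet`, which succeeds on a zero exit status. A minimal local sketch of the same check (minikube actually runs the command through its ssh_runner inside the node):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// kubeletActive mirrors the probe in the log: systemctl's --quiet flag
	// suppresses output, so a nil error (exit status 0) means "active".
	func kubeletActive() bool {
		cmd := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet")
		return cmd.Run() == nil
	}

	func main() {
		fmt.Println("kubelet running:", kubeletActive())
	}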
	I0929 11:24:06.287131  748845 kubeadm.go:578] duration metric: took 42.764241806s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 11:24:06.287157  748845 node_conditions.go:102] verifying NodePressure condition ...
	I0929 11:24:06.290343  748845 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0929 11:24:06.290372  748845 node_conditions.go:123] node cpu capacity is 8
	I0929 11:24:06.290405  748845 node_conditions.go:105] duration metric: took 3.241492ms to run NodePressure ...
	I0929 11:24:06.290418  748845 start.go:241] waiting for startup goroutines ...
	I0929 11:24:06.349579  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:06.687251  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:06.687249  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:06.714474  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:06.849571  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:07.187291  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:07.187423  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:07.288667  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:07.348171  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:07.686094  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:07.686741  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:07.713644  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:07.848686  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:08.187212  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:08.187221  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:08.214192  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:08.348900  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:08.686915  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:08.686951  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:08.713680  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:08.848627  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:09.187139  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:09.187204  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:09.214036  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:09.348532  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:09.686666  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:09.686721  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:09.713851  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:09.848519  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:10.186667  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:10.186854  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:10.213356  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:10.349525  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:10.686785  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:10.686808  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:10.714161  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:10.849203  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:11.186513  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:11.186605  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:11.214031  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:11.348487  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:11.686379  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:11.686455  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:11.714336  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:11.848932  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:12.186575  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:12.186605  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:12.212832  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:12.348421  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:12.686322  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:12.686464  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:12.713627  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:12.848202  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:13.187112  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:13.187133  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:13.213556  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:13.348073  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:13.686352  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:13.686379  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:13.714436  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:13.849347  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:14.186415  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:14.186488  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:14.214462  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:14.349321  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:14.686264  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:14.686438  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:14.714437  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:14.849395  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:15.186427  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:15.186569  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:15.213243  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:15.348834  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:15.686835  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:15.687056  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:15.713716  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:15.848624  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:16.187224  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:16.187232  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:16.214018  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:16.349221  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:16.686452  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:16.686644  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:16.713626  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:16.848341  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:17.187262  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:17.187326  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:17.214102  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:17.348483  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:17.686875  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:17.687018  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:17.713905  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:17.848620  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:18.187244  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:18.187286  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:18.214068  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:18.348894  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:18.687057  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:18.687099  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:18.713849  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:18.848658  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:19.186921  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:19.186921  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:19.213975  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:19.349336  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:19.686551  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:19.686634  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:19.713880  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:19.848623  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:20.187320  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:20.187384  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:20.214076  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:20.348765  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:20.687135  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:20.687174  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:20.713997  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:20.848882  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:21.077089  748845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:24:21.187141  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:21.187142  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:21.214194  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:21.348937  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 11:24:21.643164  748845 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:24:21.643205  748845 retry.go:31] will retry after 25.819575444s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
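The failure above is a manifest-validation error, not a transient cluster problem: kubectl rejects /etc/kubernetes/addons/ig-crd.yaml because the file has no apiVersion or kind set, so the 25.8s retry that addons.go schedules will hit the identical error. A sketch of the apply-and-retry wrapper those log lines imply (hypothetical names; not minikube's addons.go code, and retrying cannot fix a malformed manifest):

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"time"
	)

	// applyWithRetry shells out to kubectl apply and retries on failure,
	// matching the "apply failed, will retry" behaviour in the log.
	// A deterministic validation error ("apiVersion not set, kind not set")
	// will fail every attempt until the manifest itself is fixed.
	func applyWithRetry(kubeconfig string, delay time.Duration, attempts int, files ...string) error {
		args := []string{"apply", "--force"}
		for _, f := range files {
			args = append(args, "-f", f)
		}
		var lastErr error
		for i := 0; i < attempts; i++ {
			cmd := exec.Command("kubectl", args...)
			cmd.Env = append(cmd.Environ(), "KUBECONFIG="+kubeconfig)
			out, err := cmd.CombinedOutput()
			if err == nil {
				return nil
			}
			lastErr = fmt.Errorf("apply failed: %v\n%s", err, out)
			log.Printf("will retry after %v: %v", delay, lastErr)
			time.Sleep(delay)
		}
		return lastErr
	}

	func main() {
		err := applyWithRetry("/var/lib/minikube/kubeconfig", 25*time.Second, 2,
			"/etc/kubernetes/addons/ig-crd.yaml",
			"/etc/kubernetes/addons/ig-deployment.yaml")
		if err != nil {
			fmt.Println(err)
		}
	}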
	I0929 11:24:21.687244  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:21.687356  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:21.714432  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:21.849272  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:22.186638  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:22.186692  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:22.213851  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:22.348417  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:22.686101  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:22.686144  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:22.713467  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:22.849032  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:23.185923  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:23.186033  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:23.213404  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:23.348932  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:23.686101  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:23.686516  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:23.713209  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:23.849109  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:24.186487  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:24.186540  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:24.214158  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:24.348796  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:24.687008  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:24.687023  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:24.714226  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:24.849404  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:25.186316  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:25.186452  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:25.214249  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:25.349148  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:25.689557  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:25.689634  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:25.713937  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:25.848701  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:26.187821  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:26.187884  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:26.214485  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:26.349174  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:26.686878  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:26.686924  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:26.714590  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:26.848507  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:27.187461  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:27.187510  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:27.213779  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:27.348505  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:27.689066  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:27.689177  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:27.714382  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:27.848997  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:28.188019  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:28.188054  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:28.214773  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:28.348838  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:28.687573  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:28.687666  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:28.714011  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:28.849471  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:29.187537  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:29.187571  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:29.215081  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:29.349320  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:29.686833  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:29.686878  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:29.714714  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:29.848548  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:30.187158  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:30.187248  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:30.215138  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:30.349473  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:30.686756  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:30.686843  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:30.713816  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:30.849210  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:31.186848  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:31.186885  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:31.213954  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:31.348794  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:31.687262  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:31.687390  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:31.714674  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:31.849012  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:32.187229  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:32.187229  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:32.288083  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:32.348733  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:32.686928  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:32.686990  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:32.713917  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:32.848334  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:33.186720  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:33.186832  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:33.213327  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:33.348869  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:33.686795  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:33.686934  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:33.712994  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:33.848364  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:34.186641  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:34.186817  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:34.213638  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:34.348234  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:34.686473  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:34.686532  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:34.713747  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:34.848731  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:35.187181  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:35.187218  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:35.214140  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:35.348849  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:35.687418  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:35.687462  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:35.714446  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:35.849259  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:36.278149  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:36.278259  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:36.278393  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:36.531185  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:36.687285  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:36.687350  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:36.714761  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:36.848747  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:37.187308  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:37.187515  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:37.214893  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:37.348773  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:37.687775  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:37.687882  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:37.788667  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:37.848397  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:38.186751  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:38.186804  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:38.214037  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:38.349077  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:38.687177  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:38.687230  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:38.714869  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:38.848770  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:39.187221  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:39.187334  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:39.214334  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:39.349023  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:39.689822  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:39.690051  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:39.714745  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:39.848802  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:40.187854  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:40.187914  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:40.213788  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:40.348873  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:40.687444  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:40.687461  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:40.714221  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:40.849013  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:41.187323  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:41.187442  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:41.214686  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:41.351037  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:41.686406  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:41.686473  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:41.714081  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:41.848569  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:42.187251  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:42.187287  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:42.214083  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:42.348733  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:42.687226  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:42.687271  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:42.714601  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:42.849510  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:43.186911  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:43.186979  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:43.288139  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:43.348700  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:43.686976  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:43.687051  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:43.714136  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:43.848789  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:44.187341  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:44.187388  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:44.214919  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:44.348772  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:44.686893  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:44.686986  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:44.714291  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:44.849139  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:45.186918  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:45.187053  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:45.213528  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:45.349079  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:45.686576  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:45.686647  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:45.713836  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:45.848554  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:46.187028  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:46.187277  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:46.214231  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:46.348757  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:46.687271  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:46.687309  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:46.714279  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:46.848846  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:47.186948  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:47.187109  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:47.214028  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:47.349109  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:47.463205  748845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 11:24:47.687171  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:47.687176  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:47.714148  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:47.848744  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 11:24:48.014720  748845 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 11:24:48.014759  748845 retry.go:31] will retry after 34.10702456s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
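The validation failure above means kubectl's client-side schema check found at least one YAML document in /etc/kubernetes/addons/ig-crd.yaml without the required top-level apiVersion and kind fields (in a multi-document manifest, every document needs its own header). A quick diagnostic sketch, run against the node, that checks for those headers (the grep pattern is illustrative and not part of the original run):

	minikube -p addons-164332 ssh -- \
	  "grep -E '^(apiVersion|kind):' /etc/kubernetes/addons/ig-crd.yaml || echo 'header fields missing'"

If the headers are genuinely absent, the retry scheduled below will hit the same error until the manifest itself is fixed.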
	I0929 11:24:48.187032  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:48.187082  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:48.214057  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:48.348902  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:48.687314  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:48.687355  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:48.714612  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:48.848824  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:49.187783  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:49.187917  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:49.214154  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:49.349594  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:49.689475  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:49.689579  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:49.715541  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:49.849555  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:50.187506  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:50.187690  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:50.214138  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:50.348919  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:50.687076  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:50.687182  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:50.714131  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:50.849367  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:51.186399  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:51.186561  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:51.213507  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:51.349629  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:51.686746  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:51.686788  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:51.713412  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:51.849246  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:52.186865  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:52.186886  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:52.213930  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:52.348771  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:52.687014  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:52.687111  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:52.714304  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:52.849084  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:53.187060  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:53.187136  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:53.214061  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:53.348647  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:53.686802  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:53.687001  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:53.713802  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:53.848824  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:54.187318  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:54.187360  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:54.214071  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:54.349252  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:54.686286  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:54.686348  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:54.714696  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:54.848292  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:55.186315  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:55.186363  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:55.214652  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:55.348909  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:55.687185  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:55.687220  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:55.713386  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:55.848891  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:56.186816  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:56.187071  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:56.213924  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:56.348635  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:56.686919  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:56.686989  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:56.714305  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:56.849046  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:57.187138  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:57.187254  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:57.213913  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:57.348714  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:57.686657  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:57.686715  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:57.713374  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:57.849126  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:58.187302  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:58.187421  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:58.214604  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:58.349702  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:58.687136  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:58.687195  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:58.714227  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:58.849143  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:59.186467  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:59.186476  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:59.214787  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:59.349878  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:24:59.686720  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 11:24:59.686752  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:24:59.713268  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:24:59.848801  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:25:00.187026  748845 kapi.go:107] duration metric: took 1m35.003846743s to wait for kubernetes.io/minikube-addons=registry ...
	I0929 11:25:00.188699  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:25:00.214218  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:25:00.348737  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:25:00.700594  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:25:00.714199  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:25:00.848929  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:25:01.233370  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:25:01.233382  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:25:01.349437  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:25:01.686599  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:25:01.713452  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:25:01.849354  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:25:02.187380  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:25:02.214589  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:25:02.348665  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:25:02.686740  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:25:02.713631  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:25:02.848629  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:25:03.187112  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:25:03.213985  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:25:03.348451  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:25:03.686362  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:25:03.714238  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:25:03.849599  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:25:04.187436  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:25:04.214001  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:25:04.349372  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:25:04.687166  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:25:04.714780  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:25:04.851620  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:25:05.187292  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:25:05.214368  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:25:05.349340  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:25:05.686281  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:25:05.714434  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:25:05.849203  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:25:06.188204  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:25:06.214607  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:25:06.348672  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:25:06.686627  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:25:06.713621  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:25:06.848247  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:25:07.187464  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:25:07.214463  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:25:07.348353  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:25:07.686772  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:25:07.714026  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:25:07.849118  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:25:08.187281  748845 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 11:25:08.214014  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:25:08.348294  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:25:08.686655  748845 kapi.go:107] duration metric: took 1m43.503460287s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0929 11:25:08.713437  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:25:08.849419  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:25:09.213933  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:25:09.348783  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:25:09.714582  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:25:09.849106  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:25:10.214782  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:25:10.348501  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:25:10.714523  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:25:10.848660  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:25:11.214578  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:25:11.348394  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:25:11.713878  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:25:11.848611  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:25:12.214584  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:25:12.347909  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:25:12.714273  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:25:12.848779  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:25:13.214919  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:25:13.348532  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:25:13.713925  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:25:13.849079  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 11:25:14.215062  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:25:14.349073  748845 kapi.go:107] duration metric: took 1m42.503679675s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0929 11:25:14.350698  748845 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-164332 cluster.
	I0929 11:25:14.351917  748845 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0929 11:25:14.353233  748845 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
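The three gcp-auth lines above double as a how-to: from this point on, the addon mounts GCP credentials into every new pod at creation time unless the pod carries the gcp-auth-skip-secret label. A minimal opt-out sketch (the pod name and image are placeholders, not taken from this run):

	kubectl --context addons-164332 run skip-demo \
	  --image=busybox \
	  --labels='gcp-auth-skip-secret=true' \
	  -- sleep 3600

Existing pods are untouched either way, which is why the log suggests recreating them or rerunning addons enable with --refresh.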
	I0929 11:25:14.714393  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:25:15.214132  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:25:15.713824  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:25:16.221431  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:25:16.715210  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:25:17.214159  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:25:17.713741  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:25:18.214540  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:25:18.714673  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:25:19.214654  748845 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 11:25:19.714648  748845 kapi.go:107] duration metric: took 1m54.004256013s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0929 11:25:22.122979  748845 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W0929 11:25:22.658187  748845 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0929 11:25:22.658354  748845 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
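The stderr printed its own workaround: rerunning the apply with --validate=false disables the client-side check that is failing here. A sketch of that manual retry (the command is taken verbatim from the log; only the suggested flag is added):

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.34.0/kubectl apply --force --validate=false \
	  -f /etc/kubernetes/addons/ig-crd.yaml \
	  -f /etc/kubernetes/addons/ig-deployment.yaml

Note this only skips validation on the client; a document that truly lacks apiVersion or kind will normally still be rejected by the API server, so repairing ig-crd.yaml is the durable fix.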
	I0929 11:25:22.660363  748845 out.go:179] * Enabled addons: amd-gpu-device-plugin, nvidia-device-plugin, storage-provisioner, ingress-dns, cloud-spanner, registry-creds, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0929 11:25:22.661361  748845 addons.go:514] duration metric: took 1m59.13847155s for enable addons: enabled=[amd-gpu-device-plugin nvidia-device-plugin storage-provisioner ingress-dns cloud-spanner registry-creds metrics-server yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0929 11:25:22.661404  748845 start.go:246] waiting for cluster config update ...
	I0929 11:25:22.661431  748845 start.go:255] writing updated cluster config ...
	I0929 11:25:22.661719  748845 ssh_runner.go:195] Run: rm -f paused
	I0929 11:25:22.665873  748845 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 11:25:22.670353  748845 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-hp6gw" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:25:22.674596  748845 pod_ready.go:94] pod "coredns-66bc5c9577-hp6gw" is "Ready"
	I0929 11:25:22.674616  748845 pod_ready.go:86] duration metric: took 4.241191ms for pod "coredns-66bc5c9577-hp6gw" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:25:22.676489  748845 pod_ready.go:83] waiting for pod "etcd-addons-164332" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:25:22.680163  748845 pod_ready.go:94] pod "etcd-addons-164332" is "Ready"
	I0929 11:25:22.680183  748845 pod_ready.go:86] duration metric: took 3.673882ms for pod "etcd-addons-164332" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:25:22.681949  748845 pod_ready.go:83] waiting for pod "kube-apiserver-addons-164332" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:25:22.685597  748845 pod_ready.go:94] pod "kube-apiserver-addons-164332" is "Ready"
	I0929 11:25:22.685617  748845 pod_ready.go:86] duration metric: took 3.640896ms for pod "kube-apiserver-addons-164332" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:25:22.687447  748845 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-164332" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:25:23.069506  748845 pod_ready.go:94] pod "kube-controller-manager-addons-164332" is "Ready"
	I0929 11:25:23.069554  748845 pod_ready.go:86] duration metric: took 382.08241ms for pod "kube-controller-manager-addons-164332" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:25:23.269788  748845 pod_ready.go:83] waiting for pod "kube-proxy-s6bp8" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:25:23.669958  748845 pod_ready.go:94] pod "kube-proxy-s6bp8" is "Ready"
	I0929 11:25:23.670002  748845 pod_ready.go:86] duration metric: took 400.188182ms for pod "kube-proxy-s6bp8" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:25:23.870149  748845 pod_ready.go:83] waiting for pod "kube-scheduler-addons-164332" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:25:24.269860  748845 pod_ready.go:94] pod "kube-scheduler-addons-164332" is "Ready"
	I0929 11:25:24.269903  748845 pod_ready.go:86] duration metric: took 399.728217ms for pod "kube-scheduler-addons-164332" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 11:25:24.269921  748845 pod_ready.go:40] duration metric: took 1.604013538s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 11:25:24.317461  748845 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0929 11:25:24.319135  748845 out.go:179] * Done! kubectl is now configured to use "addons-164332" cluster and "default" namespace by default
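The pod_ready block above polls kube-system pods by label selector until each reports Ready. The same checks can be reproduced by hand with the context this run just configured (selectors taken from the log):

	kubectl --context addons-164332 -n kube-system get pods -l k8s-app=kube-dns
	kubectl --context addons-164332 -n kube-system wait pod \
	  -l component=kube-apiserver --for=condition=Ready --timeout=60s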
	
	
	==> CRI-O <==
	Sep 29 11:27:18 addons-164332 crio[931]: time="2025-09-29 11:27:18.106435784Z" level=info msg="Removing pod sandbox: 155d240f54f1fccc25e6ff0c1fa438a6a6b4cdcdcd5b68b0dae1ecd42702ac38" id=e059a0b2-9db6-47df-b148-b780d142c7b1 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 29 11:27:18 addons-164332 crio[931]: time="2025-09-29 11:27:18.112620595Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Sep 29 11:27:18 addons-164332 crio[931]: time="2025-09-29 11:27:18.112648912Z" level=info msg="Removed pod sandbox: 155d240f54f1fccc25e6ff0c1fa438a6a6b4cdcdcd5b68b0dae1ecd42702ac38" id=e059a0b2-9db6-47df-b148-b780d142c7b1 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 29 11:27:18 addons-164332 crio[931]: time="2025-09-29 11:27:18.113053859Z" level=info msg="Stopping pod sandbox: d87f6fbd8e6e5c21f949449246adbf731d9b32ad3941031d20b89c08f3329b9a" id=73bc2294-65aa-434c-b641-4848fd536f77 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 29 11:27:18 addons-164332 crio[931]: time="2025-09-29 11:27:18.113093721Z" level=info msg="Stopped pod sandbox (already stopped): d87f6fbd8e6e5c21f949449246adbf731d9b32ad3941031d20b89c08f3329b9a" id=73bc2294-65aa-434c-b641-4848fd536f77 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 29 11:27:18 addons-164332 crio[931]: time="2025-09-29 11:27:18.113432569Z" level=info msg="Removing pod sandbox: d87f6fbd8e6e5c21f949449246adbf731d9b32ad3941031d20b89c08f3329b9a" id=56efa472-9334-46a2-8bfd-cd30736258ba name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 29 11:27:18 addons-164332 crio[931]: time="2025-09-29 11:27:18.119393500Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Sep 29 11:27:18 addons-164332 crio[931]: time="2025-09-29 11:27:18.119419682Z" level=info msg="Removed pod sandbox: d87f6fbd8e6e5c21f949449246adbf731d9b32ad3941031d20b89c08f3329b9a" id=56efa472-9334-46a2-8bfd-cd30736258ba name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 29 11:27:18 addons-164332 crio[931]: time="2025-09-29 11:27:18.119746490Z" level=info msg="Stopping pod sandbox: b17b57d793e73484c8ac89e35a66a7081ff0278180186b6e301b8c818cabcd08" id=3caa174d-e594-4c31-86f4-e01c255b3abe name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 29 11:27:18 addons-164332 crio[931]: time="2025-09-29 11:27:18.119783626Z" level=info msg="Stopped pod sandbox (already stopped): b17b57d793e73484c8ac89e35a66a7081ff0278180186b6e301b8c818cabcd08" id=3caa174d-e594-4c31-86f4-e01c255b3abe name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 29 11:27:18 addons-164332 crio[931]: time="2025-09-29 11:27:18.120123968Z" level=info msg="Removing pod sandbox: b17b57d793e73484c8ac89e35a66a7081ff0278180186b6e301b8c818cabcd08" id=d122b295-872f-41a7-8ca9-395a589b7adf name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 29 11:27:18 addons-164332 crio[931]: time="2025-09-29 11:27:18.126758476Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Sep 29 11:27:18 addons-164332 crio[931]: time="2025-09-29 11:27:18.126786963Z" level=info msg="Removed pod sandbox: b17b57d793e73484c8ac89e35a66a7081ff0278180186b6e301b8c818cabcd08" id=d122b295-872f-41a7-8ca9-395a589b7adf name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 29 11:28:25 addons-164332 crio[931]: time="2025-09-29 11:28:25.778920217Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-hj9wg/POD" id=1443550f-355d-4b96-bf2c-4fadb94123c0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 29 11:28:25 addons-164332 crio[931]: time="2025-09-29 11:28:25.779034496Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 29 11:28:25 addons-164332 crio[931]: time="2025-09-29 11:28:25.798660420Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-hj9wg Namespace:default ID:530c23ba24eb62f529c54cb231444546b3fdd71031536002736d8ba974a53f65 UID:8f4b09df-8029-4d0f-a75e-166bfe6d126c NetNS:/var/run/netns/12416acb-67d9-4084-829b-ad19c0c9af8b Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 29 11:28:25 addons-164332 crio[931]: time="2025-09-29 11:28:25.798692598Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-hj9wg to CNI network \"kindnet\" (type=ptp)"
	Sep 29 11:28:25 addons-164332 crio[931]: time="2025-09-29 11:28:25.809458791Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-hj9wg Namespace:default ID:530c23ba24eb62f529c54cb231444546b3fdd71031536002736d8ba974a53f65 UID:8f4b09df-8029-4d0f-a75e-166bfe6d126c NetNS:/var/run/netns/12416acb-67d9-4084-829b-ad19c0c9af8b Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 29 11:28:25 addons-164332 crio[931]: time="2025-09-29 11:28:25.809631705Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-hj9wg for CNI network kindnet (type=ptp)"
	Sep 29 11:28:25 addons-164332 crio[931]: time="2025-09-29 11:28:25.810729041Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Sep 29 11:28:25 addons-164332 crio[931]: time="2025-09-29 11:28:25.812015523Z" level=info msg="Ran pod sandbox 530c23ba24eb62f529c54cb231444546b3fdd71031536002736d8ba974a53f65 with infra container: default/hello-world-app-5d498dc89-hj9wg/POD" id=1443550f-355d-4b96-bf2c-4fadb94123c0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 29 11:28:25 addons-164332 crio[931]: time="2025-09-29 11:28:25.813566924Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=b023ee27-822a-4ec1-a82b-6558ebb97232 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 11:28:25 addons-164332 crio[931]: time="2025-09-29 11:28:25.813851905Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=b023ee27-822a-4ec1-a82b-6558ebb97232 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 11:28:25 addons-164332 crio[931]: time="2025-09-29 11:28:25.814491550Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=e2be7877-9596-458a-a558-d5f41086da6a name=/runtime.v1.ImageService/PullImage
	Sep 29 11:28:25 addons-164332 crio[931]: time="2025-09-29 11:28:25.819177342Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
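This CRI-O excerpt comes from the node's systemd journal; a sketch of tailing it live on the minikube node (assuming the profile name from this run and journal access over ssh):

	minikube -p addons-164332 ssh -- sudo journalctl -u crio -f --since '2025-09-29 11:27:00'

The final lines show the hello-world-app sandbox coming up and the docker.io/kicbase/echo-server:1.0 pull starting, which is where this excerpt ends.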
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	55553200dc9a1       docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8                              2 minutes ago       Running             nginx                     0                   63fad45b02347       nginx
	135b7300a1543       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          2 minutes ago       Running             busybox                   0                   6f65dd85f1692       busybox
	28d7b317bb74b       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef             3 minutes ago       Running             controller                0                   d309123a54d5f       ingress-nginx-controller-9cc49f96f-nzzvw
	e775ff8fa1d55       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5            3 minutes ago       Running             gadget                    0                   4eea51d32e1bd       gadget-zb8f7
	e67a267ea5d5e       8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65                                                             3 minutes ago       Exited              patch                     2                   8cdde1cfc48f3       ingress-nginx-admission-patch-69drb
	711f1cee30f6a       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24   3 minutes ago       Exited              create                    0                   515c4674a01a3       ingress-nginx-admission-create-c8tjp
	66c49c91abf23       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             3 minutes ago       Running             local-path-provisioner    0                   42caf2c4837d3       local-path-provisioner-648f6765c9-gdsx4
	39fdb558ad650       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               3 minutes ago       Running             minikube-ingress-dns      0                   1fc0cec8cbe9b       kube-ingress-dns-minikube
	57b2c0387a041       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             4 minutes ago       Running             coredns                   0                   6e1b571161e9d       coredns-66bc5c9577-hp6gw
	5845d4d410121       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   01b37873048b2       storage-provisioner
	2a8b9e576c68c       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                                             5 minutes ago       Running             kube-proxy                0                   58d45d69ebc0c       kube-proxy-s6bp8
	a19fdaf0c430f       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                             5 minutes ago       Running             kindnet-cni               0                   df1225d5a28d8       kindnet-tl4rx
	cdece6fbe291f       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                                             5 minutes ago       Running             kube-controller-manager   0                   9b390bba996ce       kube-controller-manager-addons-164332
	60f6ac3107d73       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                                             5 minutes ago       Running             kube-apiserver            0                   b20fa5b44749a       kube-apiserver-addons-164332
	98a27320b4dee       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                                             5 minutes ago       Running             kube-scheduler            0                   22336a44130a1       kube-scheduler-addons-164332
	e51caf646ac73       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                             5 minutes ago       Running             etcd                      0                   f9cd0b0e3dcc4       etcd-addons-164332
	
	
	==> coredns [57b2c0387a04152433d18b81331cae5fb1e8034f75478bea10b45e1f718228b5] <==
	[INFO] 10.244.0.19:55504 - 48927 "A IN registry.kube-system.svc.cluster.local.europe-west4-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,aa,rd,ra 206 0.000097041s
	[INFO] 10.244.0.19:34548 - 4191 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000070493s
	[INFO] 10.244.0.19:34548 - 3898 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000114342s
	[INFO] 10.244.0.19:49913 - 2675 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000048075s
	[INFO] 10.244.0.19:49913 - 2405 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000068954s
	[INFO] 10.244.0.19:33287 - 5782 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000091893s
	[INFO] 10.244.0.19:33287 - 5563 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000150612s
	[INFO] 10.244.0.22:58203 - 48411 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000214549s
	[INFO] 10.244.0.22:39739 - 60585 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000306032s
	[INFO] 10.244.0.22:52623 - 8791 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000201769s
	[INFO] 10.244.0.22:33302 - 2697 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000194725s
	[INFO] 10.244.0.22:45169 - 30110 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000167197s
	[INFO] 10.244.0.22:39758 - 60139 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00023989s
	[INFO] 10.244.0.22:48450 - 8054 "A IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.002380658s
	[INFO] 10.244.0.22:33011 - 21589 "AAAA IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.002658475s
	[INFO] 10.244.0.22:35302 - 51961 "AAAA IN storage.googleapis.com.europe-west4-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.004960585s
	[INFO] 10.244.0.22:38652 - 63139 "A IN storage.googleapis.com.europe-west4-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.010099663s
	[INFO] 10.244.0.22:51396 - 38500 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004133636s
	[INFO] 10.244.0.22:48309 - 17457 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004273911s
	[INFO] 10.244.0.22:33670 - 33027 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.003630595s
	[INFO] 10.244.0.22:60335 - 38983 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004079957s
	[INFO] 10.244.0.22:51839 - 27838 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.001933331s
	[INFO] 10.244.0.22:48265 - 52920 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001983851s
	[INFO] 10.244.0.25:44505 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000246088s
	[INFO] 10.244.0.25:38286 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000148539s
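
Note: the NXDOMAIN bursts above are ordinary search-path expansion, not resolution failures. With the default pod resolv.conf (ndots:5), a name such as storage.googleapis.com is first tried against every cluster and host search domain visible in the queries before the bare name finally returns NOERROR. A minimal check of the search list, assuming the default-namespace busybox pod shown in the container status section:

  kubectl --context addons-164332 exec busybox -- cat /etc/resolv.conf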
	
	
	==> describe nodes <==
	Name:               addons-164332
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-164332
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e087d081f23c6d1317bb12845422265d8d3490cf
	                    minikube.k8s.io/name=addons-164332
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T11_23_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-164332
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 11:23:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-164332
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 11:28:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 11:26:52 +0000   Mon, 29 Sep 2025 11:23:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 11:26:52 +0000   Mon, 29 Sep 2025 11:23:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 11:26:52 +0000   Mon, 29 Sep 2025 11:23:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 11:26:52 +0000   Mon, 29 Sep 2025 11:24:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-164332
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 fe2add74074f49259bf5be971becd84f
	  System UUID:                eaa8d558-dad8-4f6d-9401-eb27923f3a7b
	  Boot ID:                    c950b162-3ea4-4410-8c2e-1238f18b29b9
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m3s
	  default                     hello-world-app-5d498dc89-hj9wg             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m26s
	  gadget                      gadget-zb8f7                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m2s
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-nzzvw    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         5m2s
	  kube-system                 coredns-66bc5c9577-hp6gw                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     5m4s
	  kube-system                 etcd-addons-164332                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         5m10s
	  kube-system                 kindnet-tl4rx                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      5m4s
	  kube-system                 kube-apiserver-addons-164332                250m (3%)     0 (0%)      0 (0%)           0 (0%)         5m11s
	  kube-system                 kube-controller-manager-addons-164332       200m (2%)     0 (0%)      0 (0%)           0 (0%)         5m10s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m3s
	  kube-system                 kube-proxy-s6bp8                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 kube-scheduler-addons-164332                100m (1%)     0 (0%)      0 (0%)           0 (0%)         5m11s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m3s
	  local-path-storage          local-path-provisioner-648f6765c9-gdsx4     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             310Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 5m2s   kube-proxy       
	  Normal  Starting                 5m10s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m10s  kubelet          Node addons-164332 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m10s  kubelet          Node addons-164332 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m10s  kubelet          Node addons-164332 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m5s   node-controller  Node addons-164332 event: Registered Node addons-164332 in Controller
	  Normal  NodeReady                4m22s  kubelet          Node addons-164332 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff b2 ca ec 61 42 22 08 06
	[  +4.588441] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ea 40 d2 b8 e9 db 08 06
	[Sep29 11:26] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 1b c4 37 9d 74 b6 26 5a 9a 38 ae 08 00
	[  +1.000205] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 1b c4 37 9d 74 b6 26 5a 9a 38 ae 08 00
	[  +1.024911] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: 92 1b c4 37 9d 74 b6 26 5a 9a 38 ae 08 00
	[  +1.022908] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 92 1b c4 37 9d 74 b6 26 5a 9a 38 ae 08 00
	[  +1.023945] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 92 1b c4 37 9d 74 b6 26 5a 9a 38 ae 08 00
	[  +1.023904] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 92 1b c4 37 9d 74 b6 26 5a 9a 38 ae 08 00
	[  +2.047860] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 92 1b c4 37 9d 74 b6 26 5a 9a 38 ae 08 00
	[  +4.032732] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 92 1b c4 37 9d 74 b6 26 5a 9a 38 ae 08 00
	[  +8.190439] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 92 1b c4 37 9d 74 b6 26 5a 9a 38 ae 08 00
	[ +16.382949] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 92 1b c4 37 9d 74 b6 26 5a 9a 38 ae 08 00
	[Sep29 11:27] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 92 1b c4 37 9d 74 b6 26 5a 9a 38 ae 08 00
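
Note: the repeated "martian source 10.244.0.21 from 127.0.0.1" entries show loopback-sourced traffic being forwarded onto the pod network. That is the localhost NodePort path kube-proxy enables with route_localnet=1 (see its log below), i.e. the path the failing loopback curl probe exercised. A sketch for confirming the sysctl on the node:

  out/minikube-linux-amd64 -p addons-164332 ssh "sysctl net.ipv4.conf.all.route_localnet"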
	
	
	==> etcd [e51caf646ac7355c063b6934620530c96b8b9bb3c8c553564da52b824e7735f6] <==
	{"level":"warn","ts":"2025-09-29T11:23:15.133988Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:23:15.140374Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:23:15.147052Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:23:15.152830Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:23:15.159440Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:23:15.165473Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:23:15.179112Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:23:15.182243Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:23:15.188166Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:23:15.193985Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:23:15.244182Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:23:26.095219Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:23:26.101693Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:23:52.639421Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:23:52.646356Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:23:52.668100Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:24:36.528811Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"183.206446ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/snapshot.storage.k8s.io/volumesnapshotcontents\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T11:24:36.528956Z","caller":"traceutil/trace.go:172","msg":"trace[40505122] range","detail":"{range_begin:/registry/snapshot.storage.k8s.io/volumesnapshotcontents; range_end:; response_count:0; response_revision:1051; }","duration":"183.364244ms","start":"2025-09-29T11:24:36.345570Z","end":"2025-09-29T11:24:36.528935Z","steps":["trace[40505122] 'agreement among raft nodes before linearized reading'  (duration: 59.714342ms)","trace[40505122] 'range keys from in-memory index tree'  (duration: 123.450659ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-29T11:24:36.529489Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"123.737416ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128040293638768586 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/snapshot-controller-7d9fbc56b8-qvvhb.1869bd206793b9f1\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/snapshot-controller-7d9fbc56b8-qvvhb.1869bd206793b9f1\" value_size:707 lease:8128040293638768230 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-09-29T11:24:36.529573Z","caller":"traceutil/trace.go:172","msg":"trace[16595826] linearizableReadLoop","detail":"{readStateIndex:1085; appliedIndex:1084; }","duration":"124.323419ms","start":"2025-09-29T11:24:36.405239Z","end":"2025-09-29T11:24:36.529563Z","steps":["trace[16595826] 'read index received'  (duration: 66.316µs)","trace[16595826] 'applied index is now lower than readState.Index'  (duration: 124.256274ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-29T11:24:36.529589Z","caller":"traceutil/trace.go:172","msg":"trace[2086253387] transaction","detail":"{read_only:false; response_revision:1052; number_of_response:1; }","duration":"186.08643ms","start":"2025-09-29T11:24:36.343479Z","end":"2025-09-29T11:24:36.529566Z","steps":["trace[2086253387] 'process raft request'  (duration: 61.795195ms)","trace[2086253387] 'compare'  (duration: 123.591608ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-29T11:24:36.529657Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"182.373366ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T11:24:36.529748Z","caller":"traceutil/trace.go:172","msg":"trace[1167040583] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1052; }","duration":"182.467923ms","start":"2025-09-29T11:24:36.347270Z","end":"2025-09-29T11:24:36.529738Z","steps":["trace[1167040583] 'agreement among raft nodes before linearized reading'  (duration: 182.334351ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T11:24:51.177895Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"156.677089ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T11:24:51.177983Z","caller":"traceutil/trace.go:172","msg":"trace[829627050] range","detail":"{range_begin:/registry/configmaps; range_end:; response_count:0; response_revision:1128; }","duration":"156.760865ms","start":"2025-09-29T11:24:51.021193Z","end":"2025-09-29T11:24:51.177954Z","steps":["trace[829627050] 'range keys from in-memory index tree'  (duration: 156.585296ms)"],"step_count":1}
	
	
	==> kernel <==
	 11:28:27 up  4:10,  0 users,  load average: 0.35, 0.76, 11.52
	Linux addons-164332 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [a19fdaf0c430f9fc5d15c041f983e77ebb605f364d5504ee77dd0cd39bfc8057] <==
	I0929 11:26:24.788637       1 main.go:301] handling current node
	I0929 11:26:34.789168       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:26:34.789198       1 main.go:301] handling current node
	I0929 11:26:44.791054       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:26:44.791089       1 main.go:301] handling current node
	I0929 11:26:54.789022       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:26:54.789063       1 main.go:301] handling current node
	I0929 11:27:04.790093       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:27:04.790125       1 main.go:301] handling current node
	I0929 11:27:14.791273       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:27:14.791307       1 main.go:301] handling current node
	I0929 11:27:24.791618       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:27:24.791657       1 main.go:301] handling current node
	I0929 11:27:34.790067       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:27:34.790109       1 main.go:301] handling current node
	I0929 11:27:44.790127       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:27:44.790164       1 main.go:301] handling current node
	I0929 11:27:54.796213       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:27:54.796251       1 main.go:301] handling current node
	I0929 11:28:04.793578       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:28:04.793611       1 main.go:301] handling current node
	I0929 11:28:14.796671       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:28:14.796703       1 main.go:301] handling current node
	I0929 11:28:24.796029       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:28:24.796059       1 main.go:301] handling current node
	
	
	==> kube-apiserver [60f6ac3107d734d2e28a2988a2be387c1d68a1871d84174f9dbc9a1049e933e5] <==
	E0929 11:25:36.204587       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:34798: use of closed network connection
	I0929 11:25:45.218074       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.109.98.100"}
	I0929 11:25:47.112831       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:26:01.379089       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I0929 11:26:01.571581       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.106.34.193"}
	I0929 11:26:15.063580       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0929 11:26:31.080624       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0929 11:26:35.938260       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:26:47.650282       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0929 11:26:47.650335       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0929 11:26:47.665508       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0929 11:26:47.665651       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0929 11:26:47.667320       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0929 11:26:47.667358       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0929 11:26:47.678261       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0929 11:26:47.678316       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0929 11:26:47.689782       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0929 11:26:47.689821       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0929 11:26:48.667555       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0929 11:26:48.690784       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0929 11:26:48.699655       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0929 11:27:00.579383       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:28:02.303839       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:28:16.534851       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:28:25.550928       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.105.22.156"}
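
Note: the tail of this log tracks the test flow directly: the nginx service receives 10.106.34.193 at 11:26:01, the snapshot.storage.k8s.io watchers are terminated at 11:26:48 (the cause of the controller-manager errors below), and hello-world-app receives 10.105.22.156 at 11:28:25, seconds before capture. A quick sketch to confirm the allocations:

  kubectl --context addons-164332 get svc nginx hello-world-app -o wide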
	
	
	==> kube-controller-manager [cdece6fbe291f5e1e76ea5ac8f0c7093935ffbf075c9e8d5dfddfe2c53ab6cd8] <==
	I0929 11:26:52.781225       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0929 11:26:55.059606       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 11:26:55.060651       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 11:26:55.796812       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 11:26:55.797755       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 11:26:56.358245       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 11:26:56.359384       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 11:27:03.784264       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 11:27:03.785503       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 11:27:06.614263       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 11:27:06.615236       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 11:27:07.671951       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 11:27:07.672850       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 11:27:29.191398       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 11:27:29.192424       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 11:27:29.804345       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 11:27:29.805341       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 11:27:32.617103       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 11:27:32.618050       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 11:28:00.079106       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 11:28:00.080132       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 11:28:06.650945       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 11:28:06.651958       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 11:28:12.959716       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 11:28:12.960622       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
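
Note: every "Failed to watch *v1.PartialObjectMetadata" pair here begins after 11:26:48, when the apiserver terminated the volumesnapshot watchers (previous section). The controller-manager's metadata informers for the deleted snapshot.storage.k8s.io resources keep retrying until they are dropped, so the errors are noisy but expected after the group's removal. A sketch to confirm the group is gone:

  kubectl --context addons-164332 api-resources --api-group=snapshot.storage.k8s.io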
	
	
	==> kube-proxy [2a8b9e576c68c0d340b4f939c77a80cab3457047cf53291f418c00d6a9dc949f] <==
	I0929 11:23:24.480916       1 server_linux.go:53] "Using iptables proxy"
	I0929 11:23:24.775341       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 11:23:24.881792       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 11:23:24.881846       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0929 11:23:24.881996       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 11:23:24.947899       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 11:23:24.948015       1 server_linux.go:132] "Using iptables Proxier"
	I0929 11:23:24.961428       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 11:23:24.962271       1 server.go:527] "Version info" version="v1.34.0"
	I0929 11:23:24.962534       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 11:23:24.971040       1 config.go:200] "Starting service config controller"
	I0929 11:23:24.972225       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 11:23:24.971581       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 11:23:24.972507       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 11:23:24.971771       1 config.go:309] "Starting node config controller"
	I0929 11:23:24.972558       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 11:23:24.972586       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 11:23:24.971568       1 config.go:106] "Starting endpoint slice config controller"
	I0929 11:23:24.972627       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 11:23:25.072945       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 11:23:25.073064       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 11:23:25.073072       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
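
Note: two lines here are relevant to the Ingress failure: the warning that nodePortAddresses is unset (NodePort connections accepted on all local IPs) and the route_localnet=1 setting that makes NodePorts reachable on 127.0.0.1, the address the failing probe used. A sketch for checking the current setting, assuming the kubeadm-style kube-proxy ConfigMap:

  kubectl --context addons-164332 -n kube-system get cm kube-proxy -o yaml | grep -n nodePortAddresses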
	
	
	==> kube-scheduler [98a27320b4dee946b79fe3920459e56d09e0e8fde60a3ad90d6bf74f949a1355] <==
	E0929 11:23:15.632952       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0929 11:23:15.632972       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0929 11:23:15.633095       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0929 11:23:15.633053       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0929 11:23:15.633171       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0929 11:23:15.633171       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0929 11:23:15.633251       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0929 11:23:15.633298       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 11:23:15.633312       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0929 11:23:15.633325       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0929 11:23:15.633386       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0929 11:23:15.633412       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 11:23:16.546459       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0929 11:23:16.580013       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0929 11:23:16.640852       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 11:23:16.690930       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0929 11:23:16.694893       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0929 11:23:16.712953       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0929 11:23:16.756283       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0929 11:23:16.762417       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 11:23:16.802469       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 11:23:16.825676       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0929 11:23:16.847173       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0929 11:23:16.853200       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	I0929 11:23:18.431347       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
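
Note: the "forbidden" list/watch errors are confined to 11:23:15-16, while RBAC was still being bootstrapped; the "Caches are synced" line at 11:23:18 shows the scheduler recovered on its own, so this is a normal startup race rather than a test problem. A sketch to verify the permissions after bootstrap:

  kubectl --context addons-164332 auth can-i list pods --as=system:kube-scheduler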
	
	
	==> kubelet <==
	Sep 29 11:26:49 addons-164332 kubelet[1560]: I0929 11:26:49.869744    1560 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="927d632a-8f21-442e-b7c1-a030a7ea7050" path="/var/lib/kubelet/pods/927d632a-8f21-442e-b7c1-a030a7ea7050/volumes"
	Sep 29 11:26:49 addons-164332 kubelet[1560]: I0929 11:26:49.870004    1560 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ca07ddf-726a-4957-a1cc-d49a887e47c8" path="/var/lib/kubelet/pods/9ca07ddf-726a-4957-a1cc-d49a887e47c8/volumes"
	Sep 29 11:26:57 addons-164332 kubelet[1560]: E0929 11:26:57.909937    1560 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759145217909705530  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 29 11:26:57 addons-164332 kubelet[1560]: E0929 11:26:57.909984    1560 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759145217909705530  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 29 11:26:59 addons-164332 kubelet[1560]: I0929 11:26:59.867754    1560 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Sep 29 11:27:07 addons-164332 kubelet[1560]: E0929 11:27:07.912788    1560 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759145227912534557  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 29 11:27:07 addons-164332 kubelet[1560]: E0929 11:27:07.912828    1560 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759145227912534557  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 29 11:27:17 addons-164332 kubelet[1560]: E0929 11:27:17.914544    1560 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759145237914310637  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 29 11:27:17 addons-164332 kubelet[1560]: E0929 11:27:17.914580    1560 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759145237914310637  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 29 11:27:18 addons-164332 kubelet[1560]: I0929 11:27:18.028106    1560 scope.go:117] "RemoveContainer" containerID="1d85659ba006565f3976f8ed8246c4feb66c59f52bf9183849c64512467c895a"
	Sep 29 11:27:18 addons-164332 kubelet[1560]: I0929 11:27:18.046947    1560 scope.go:117] "RemoveContainer" containerID="8394bdfe4a326d3676b217431118be8a730a2b57dc658a64936bafd90a071cc7"
	Sep 29 11:27:27 addons-164332 kubelet[1560]: E0929 11:27:27.916372    1560 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759145247916147895  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 29 11:27:27 addons-164332 kubelet[1560]: E0929 11:27:27.916405    1560 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759145247916147895  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 29 11:27:37 addons-164332 kubelet[1560]: E0929 11:27:37.919122    1560 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759145257918899639  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 29 11:27:37 addons-164332 kubelet[1560]: E0929 11:27:37.919157    1560 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759145257918899639  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 29 11:27:47 addons-164332 kubelet[1560]: E0929 11:27:47.922098    1560 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759145267921852276  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 29 11:27:47 addons-164332 kubelet[1560]: E0929 11:27:47.922133    1560 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759145267921852276  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 29 11:27:57 addons-164332 kubelet[1560]: E0929 11:27:57.924700    1560 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759145277924464997  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 29 11:27:57 addons-164332 kubelet[1560]: E0929 11:27:57.924728    1560 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759145277924464997  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 29 11:28:02 addons-164332 kubelet[1560]: I0929 11:28:02.867442    1560 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Sep 29 11:28:07 addons-164332 kubelet[1560]: E0929 11:28:07.926670    1560 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759145287926468951  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 29 11:28:07 addons-164332 kubelet[1560]: E0929 11:28:07.926699    1560 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759145287926468951  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 29 11:28:17 addons-164332 kubelet[1560]: E0929 11:28:17.928819    1560 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759145297928612823  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 29 11:28:17 addons-164332 kubelet[1560]: E0929 11:28:17.928856    1560 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759145297928612823  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 29 11:28:25 addons-164332 kubelet[1560]: I0929 11:28:25.517956    1560 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6rpp\" (UniqueName: \"kubernetes.io/projected/8f4b09df-8029-4d0f-a75e-166bfe6d126c-kube-api-access-w6rpp\") pod \"hello-world-app-5d498dc89-hj9wg\" (UID: \"8f4b09df-8029-4d0f-a75e-166bfe6d126c\") " pod="default/hello-world-app-5d498dc89-hj9wg"
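
Note: the recurring eviction-manager pair ("failed to get HasDedicatedImageFs ... missing image stats") is most plausibly version skew between kubelet v1.34.0 and cri-o 1.24.6 (see System Info above): the CRI ImageFsInfo response includes the overlay-images filesystem but not every field this kubelet expects, so eviction synchronization fails every ~10s without affecting the test pods. The raw CRI data can be inspected directly, as a sketch:

  out/minikube-linux-amd64 -p addons-164332 ssh "sudo crictl imagefsinfo"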
	
	
	==> storage-provisioner [5845d4d4101216efd06f64dc5c33de6dd9f9aef3ca1f370112a66577f3e9208a] <==
	W0929 11:28:02.806750       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:28:04.810157       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:28:04.814526       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:28:06.818362       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:28:06.822488       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:28:08.825518       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:28:08.829445       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:28:10.832728       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:28:10.836513       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:28:12.840165       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:28:12.844034       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:28:14.846566       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:28:14.850130       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:28:16.853251       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:28:16.856910       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:28:18.860607       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:28:18.864404       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:28:20.867389       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:28:20.871116       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:28:22.873463       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:28:22.877644       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:28:24.880240       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:28:24.884722       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:28:26.888662       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:28:26.893384       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
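The storage-provisioner warnings above are deprecation noise rather than errors: the provisioner still watches the v1 Endpoints API, which Kubernetes deprecates in favor of discovery.k8s.io/v1 EndpointSlice, so client-go prints one warning per watch request. They appear unrelated to the test failure itself. A minimal way to inspect the replacement objects, assuming the addons-164332 context from this run:

	kubectl --context addons-164332 get endpointslices.discovery.k8s.io -A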
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-164332 -n addons-164332
helpers_test.go:269: (dbg) Run:  kubectl --context addons-164332 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-hj9wg ingress-nginx-admission-create-c8tjp ingress-nginx-admission-patch-69drb
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-164332 describe pod hello-world-app-5d498dc89-hj9wg ingress-nginx-admission-create-c8tjp ingress-nginx-admission-patch-69drb
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-164332 describe pod hello-world-app-5d498dc89-hj9wg ingress-nginx-admission-create-c8tjp ingress-nginx-admission-patch-69drb: exit status 1 (70.105488ms)

-- stdout --
	Name:             hello-world-app-5d498dc89-hj9wg
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-164332/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 11:28:25 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w6rpp (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-w6rpp:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-hj9wg to addons-164332
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-c8tjp" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-69drb" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-164332 describe pod hello-world-app-5d498dc89-hj9wg ingress-nginx-admission-create-c8tjp ingress-nginx-admission-patch-69drb: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-164332 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-164332 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-164332 addons disable ingress --alsologtostderr -v=1: (7.702986561s)
--- FAIL: TestAddons/parallel/Ingress (155.42s)
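At the moment the post-mortem ran, the hello-world-app pod was two seconds old and still pulling its image (see the Events above), so the dump captures a mid-rollout snapshot rather than a crashed workload. To re-check the rollout by hand under the same profile, something like the following would work (a sketch, not part of this run):

	kubectl --context addons-164332 rollout status deployment/hello-world-app --timeout=120s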

TestFunctional/parallel/ServiceCmdConnect (603.12s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-550377 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-550377 expose deployment hello-node-connect --type=NodePort --port=8080
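Worth flagging for the failure below: this deployment references the bare short name kicbase/echo-server, with no registry and no tag. Under CRI-O such a reference only resolves if the node configures unqualified-search registries; the Ingress test above sidesteps this by using the fully qualified docker.io/kicbase/echo-server:1.0. A fully qualified variant of the same command would look like this (a sketch, not what this run executed):

	kubectl --context functional-550377 create deployment hello-node-connect --image docker.io/kicbase/echo-server:1.0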
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-nzqfl" [440cc81d-e888-42b6-9f9d-29e01eb75600] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-550377 -n functional-550377
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-09-29 11:41:58.739686466 +0000 UTC m=+1188.256249382
functional_test.go:1645: (dbg) Run:  kubectl --context functional-550377 describe po hello-node-connect-7d85dfc575-nzqfl -n default
functional_test.go:1645: (dbg) kubectl --context functional-550377 describe po hello-node-connect-7d85dfc575-nzqfl -n default:
Name:             hello-node-connect-7d85dfc575-nzqfl
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-550377/192.168.49.2
Start Time:       Mon, 29 Sep 2025 11:31:58 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
IP:           10.244.0.7
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lwq5f (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-lwq5f:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-nzqfl to functional-550377
Normal   Pulling    6m54s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     6m54s (x5 over 9m53s)   kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
Warning  Failed     6m54s (x5 over 9m53s)   kubelet            Error: ErrImagePull
Normal   BackOff    4m43s (x21 over 9m52s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m43s (x21 over 9m52s)  kubelet            Error: ImagePullBackOff
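The short-name failure above is registry resolution inside CRI-O, not a network problem: /etc/containers/registries.conf on the minikube node defines no unqualified-search registries, so "kicbase/echo-server:latest" has no registry to expand against. A minimal sketch of the setting that would let the short name resolve (not applied in this run):

	# /etc/containers/registries.conf on the node
	unqualified-search-registries = ["docker.io"]

The alternative, as noted above, is to reference a fully qualified image in the deployment.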
functional_test.go:1645: (dbg) Run:  kubectl --context functional-550377 logs hello-node-connect-7d85dfc575-nzqfl -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-550377 logs hello-node-connect-7d85dfc575-nzqfl -n default: exit status 1 (70.194312ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-nzqfl" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1645: kubectl --context functional-550377 logs hello-node-connect-7d85dfc575-nzqfl -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-550377 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-nzqfl
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-550377/192.168.49.2
Start Time:       Mon, 29 Sep 2025 11:31:58 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
IP:           10.244.0.7
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lwq5f (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-lwq5f:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-nzqfl to functional-550377
Normal   Pulling    6m54s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     6m54s (x5 over 9m53s)   kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
Warning  Failed     6m54s (x5 over 9m53s)   kubelet            Error: ErrImagePull
Normal   BackOff    4m43s (x21 over 9m52s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m43s (x21 over 9m52s)  kubelet            Error: ImagePullBackOff

functional_test.go:1618: (dbg) Run:  kubectl --context functional-550377 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-550377 logs -l app=hello-node-connect: exit status 1 (99.249737ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-nzqfl" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1620: "kubectl --context functional-550377 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-550377 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.108.252.138
IPs:                      10.108.252.138
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31474/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
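The empty Endpoints: field matches the pod state above: the only backing pod never became Ready, so NodePort 31474 has nothing to route to. Assuming the same context, the backing EndpointSlice can be checked directly:

	kubectl --context functional-550377 get endpointslices -l kubernetes.io/service-name=hello-node-connect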
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-550377
helpers_test.go:243: (dbg) docker inspect functional-550377:

-- stdout --
	[
	    {
	        "Id": "962353d2917fb9c1187842777ed82c988bebba0abf05fab05f6cf4219a62c38e",
	        "Created": "2025-09-29T11:29:34.775311927Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 773837,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T11:29:34.80523425Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/962353d2917fb9c1187842777ed82c988bebba0abf05fab05f6cf4219a62c38e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/962353d2917fb9c1187842777ed82c988bebba0abf05fab05f6cf4219a62c38e/hostname",
	        "HostsPath": "/var/lib/docker/containers/962353d2917fb9c1187842777ed82c988bebba0abf05fab05f6cf4219a62c38e/hosts",
	        "LogPath": "/var/lib/docker/containers/962353d2917fb9c1187842777ed82c988bebba0abf05fab05f6cf4219a62c38e/962353d2917fb9c1187842777ed82c988bebba0abf05fab05f6cf4219a62c38e-json.log",
	        "Name": "/functional-550377",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-550377:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-550377",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "962353d2917fb9c1187842777ed82c988bebba0abf05fab05f6cf4219a62c38e",
	                "LowerDir": "/var/lib/docker/overlay2/064e4e23afd7790aedd6ac5312d3ae888345a8d796ed0702b523ff6ee5d1cd46-init/diff:/var/lib/docker/overlay2/42045f7131296b05e4732d8df48574b1ff4b00e9dbcd57ed60e11052fef55646/diff",
	                "MergedDir": "/var/lib/docker/overlay2/064e4e23afd7790aedd6ac5312d3ae888345a8d796ed0702b523ff6ee5d1cd46/merged",
	                "UpperDir": "/var/lib/docker/overlay2/064e4e23afd7790aedd6ac5312d3ae888345a8d796ed0702b523ff6ee5d1cd46/diff",
	                "WorkDir": "/var/lib/docker/overlay2/064e4e23afd7790aedd6ac5312d3ae888345a8d796ed0702b523ff6ee5d1cd46/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-550377",
	                "Source": "/var/lib/docker/volumes/functional-550377/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-550377",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-550377",
	                "name.minikube.sigs.k8s.io": "functional-550377",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8632ae78ef9e97ff237f6b722bde706cc089bd45a98ac1957822ed1ea81bbc98",
	            "SandboxKey": "/var/run/docker/netns/8632ae78ef9e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32898"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32899"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32902"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32900"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32901"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-550377": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ae:48:7c:0d:bb:4e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b46b74920a3a3fba500ea3f66ac9b551863f61987fd4be392bdaf745c80e7d25",
	                    "EndpointID": "dc847c8c97b6fe0633852029699dac5fa194e78dead4549f7e8abbe6f1904dda",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-550377",
	                        "962353d2917f"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
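One detail worth reading off the inspect output: HostConfig.PortBindings requests ephemeral host ports (every HostPort is empty), and the actual assignments surface under NetworkSettings.Ports. A Go-template query can extract a single mapping, e.g. the API server port 8441/tcp (a sketch using the container name from this run):

	docker inspect -f '{{ (index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort }}' functional-550377

which, per the output above, would print 32901.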
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-550377 -n functional-550377
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-550377 logs -n 25: (1.467057364s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                        ARGS                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-550377 ssh -- ls -la /mount-9p                                                                          │ functional-550377 │ jenkins │ v1.37.0 │ 29 Sep 25 11:32 UTC │ 29 Sep 25 11:32 UTC │
	│ ssh            │ functional-550377 ssh sudo umount -f /mount-9p                                                                     │ functional-550377 │ jenkins │ v1.37.0 │ 29 Sep 25 11:32 UTC │                     │
	│ mount          │ -p functional-550377 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3267768702/001:/mount3 --alsologtostderr -v=1 │ functional-550377 │ jenkins │ v1.37.0 │ 29 Sep 25 11:32 UTC │                     │
	│ mount          │ -p functional-550377 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3267768702/001:/mount1 --alsologtostderr -v=1 │ functional-550377 │ jenkins │ v1.37.0 │ 29 Sep 25 11:32 UTC │                     │
	│ mount          │ -p functional-550377 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3267768702/001:/mount2 --alsologtostderr -v=1 │ functional-550377 │ jenkins │ v1.37.0 │ 29 Sep 25 11:32 UTC │                     │
	│ ssh            │ functional-550377 ssh findmnt -T /mount1                                                                           │ functional-550377 │ jenkins │ v1.37.0 │ 29 Sep 25 11:32 UTC │                     │
	│ ssh            │ functional-550377 ssh findmnt -T /mount1                                                                           │ functional-550377 │ jenkins │ v1.37.0 │ 29 Sep 25 11:32 UTC │ 29 Sep 25 11:32 UTC │
	│ ssh            │ functional-550377 ssh findmnt -T /mount2                                                                           │ functional-550377 │ jenkins │ v1.37.0 │ 29 Sep 25 11:32 UTC │ 29 Sep 25 11:32 UTC │
	│ ssh            │ functional-550377 ssh findmnt -T /mount3                                                                           │ functional-550377 │ jenkins │ v1.37.0 │ 29 Sep 25 11:32 UTC │ 29 Sep 25 11:32 UTC │
	│ mount          │ -p functional-550377 --kill=true                                                                                   │ functional-550377 │ jenkins │ v1.37.0 │ 29 Sep 25 11:32 UTC │                     │
	│ start          │ -p functional-550377 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio          │ functional-550377 │ jenkins │ v1.37.0 │ 29 Sep 25 11:32 UTC │                     │
	│ start          │ -p functional-550377 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio          │ functional-550377 │ jenkins │ v1.37.0 │ 29 Sep 25 11:32 UTC │                     │
	│ start          │ -p functional-550377 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                    │ functional-550377 │ jenkins │ v1.37.0 │ 29 Sep 25 11:32 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-550377 --alsologtostderr -v=1                                                     │ functional-550377 │ jenkins │ v1.37.0 │ 29 Sep 25 11:32 UTC │ 29 Sep 25 11:32 UTC │
	│ license        │                                                                                                                    │ minikube          │ jenkins │ v1.37.0 │ 29 Sep 25 11:32 UTC │ 29 Sep 25 11:32 UTC │
	│ update-context │ functional-550377 update-context --alsologtostderr -v=2                                                            │ functional-550377 │ jenkins │ v1.37.0 │ 29 Sep 25 11:32 UTC │ 29 Sep 25 11:32 UTC │
	│ update-context │ functional-550377 update-context --alsologtostderr -v=2                                                            │ functional-550377 │ jenkins │ v1.37.0 │ 29 Sep 25 11:32 UTC │ 29 Sep 25 11:32 UTC │
	│ update-context │ functional-550377 update-context --alsologtostderr -v=2                                                            │ functional-550377 │ jenkins │ v1.37.0 │ 29 Sep 25 11:32 UTC │ 29 Sep 25 11:32 UTC │
	│ image          │ functional-550377 image ls --format short --alsologtostderr                                                        │ functional-550377 │ jenkins │ v1.37.0 │ 29 Sep 25 11:32 UTC │ 29 Sep 25 11:32 UTC │
	│ image          │ functional-550377 image ls --format yaml --alsologtostderr                                                         │ functional-550377 │ jenkins │ v1.37.0 │ 29 Sep 25 11:32 UTC │ 29 Sep 25 11:32 UTC │
	│ ssh            │ functional-550377 ssh pgrep buildkitd                                                                              │ functional-550377 │ jenkins │ v1.37.0 │ 29 Sep 25 11:32 UTC │                     │
	│ image          │ functional-550377 image build -t localhost/my-image:functional-550377 testdata/build --alsologtostderr             │ functional-550377 │ jenkins │ v1.37.0 │ 29 Sep 25 11:32 UTC │ 29 Sep 25 11:32 UTC │
	│ image          │ functional-550377 image ls                                                                                         │ functional-550377 │ jenkins │ v1.37.0 │ 29 Sep 25 11:32 UTC │ 29 Sep 25 11:32 UTC │
	│ image          │ functional-550377 image ls --format json --alsologtostderr                                                         │ functional-550377 │ jenkins │ v1.37.0 │ 29 Sep 25 11:32 UTC │ 29 Sep 25 11:32 UTC │
	│ image          │ functional-550377 image ls --format table --alsologtostderr                                                        │ functional-550377 │ jenkins │ v1.37.0 │ 29 Sep 25 11:32 UTC │ 29 Sep 25 11:32 UTC │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 11:32:19
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 11:32:19.995170  790155 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:32:19.995407  790155 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:32:19.995415  790155 out.go:374] Setting ErrFile to fd 2...
	I0929 11:32:19.995419  790155 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:32:19.995615  790155 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21655-743952/.minikube/bin
	I0929 11:32:19.996088  790155 out.go:368] Setting JSON to false
	I0929 11:32:19.997069  790155 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":15277,"bootTime":1759130263,"procs":241,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 11:32:19.997127  790155 start.go:140] virtualization: kvm guest
	I0929 11:32:19.998809  790155 out.go:179] * [functional-550377] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 11:32:20.000022  790155 out.go:179]   - MINIKUBE_LOCATION=21655
	I0929 11:32:20.000076  790155 notify.go:220] Checking for updates...
	I0929 11:32:20.002115  790155 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 11:32:20.003274  790155 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21655-743952/kubeconfig
	I0929 11:32:20.004267  790155 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21655-743952/.minikube
	I0929 11:32:20.005275  790155 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 11:32:20.006401  790155 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 11:32:20.007845  790155 config.go:182] Loaded profile config "functional-550377": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 11:32:20.008320  790155 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 11:32:20.032638  790155 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 11:32:20.032727  790155 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 11:32:20.090156  790155 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-09-29 11:32:20.079883582 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 11:32:20.090289  790155 docker.go:318] overlay module found
	I0929 11:32:20.091880  790155 out.go:179] * Using the docker driver based on existing profile
	I0929 11:32:20.092884  790155 start.go:304] selected driver: docker
	I0929 11:32:20.092897  790155 start.go:924] validating driver "docker" against &{Name:functional-550377 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-550377 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 11:32:20.093013  790155 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 11:32:20.093099  790155 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 11:32:20.148436  790155 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-09-29 11:32:20.138875185 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 11:32:20.149143  790155 cni.go:84] Creating CNI manager for ""
	I0929 11:32:20.149202  790155 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 11:32:20.149249  790155 start.go:348] cluster config:
	{Name:functional-550377 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-550377 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 11:32:20.150928  790155 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Sep 29 11:32:25 functional-550377 crio[4240]: time="2025-09-29 11:32:25.054090936Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 29 11:32:25 functional-550377 crio[4240]: time="2025-09-29 11:32:25.064289023Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/2b5648bac95e4594530b26740e0fe4ab73bc26e70d2985082b0a0d5ad18dea91/merged/etc/group: no such file or directory"
	Sep 29 11:32:25 functional-550377 crio[4240]: time="2025-09-29 11:32:25.118576070Z" level=info msg="Created container c80cec7913049fea15af5e12da4316ea13ad4bda3da90fc6c45da088140807af: kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-4qztg/dashboard-metrics-scraper" id=188b06da-6ff4-4e6e-b6ef-7b1d20508ae0 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 29 11:32:25 functional-550377 crio[4240]: time="2025-09-29 11:32:25.119205266Z" level=info msg="Starting container: c80cec7913049fea15af5e12da4316ea13ad4bda3da90fc6c45da088140807af" id=11eede7e-753c-48fa-99a4-50ee3272278d name=/runtime.v1.RuntimeService/StartContainer
	Sep 29 11:32:25 functional-550377 crio[4240]: time="2025-09-29 11:32:25.124998592Z" level=info msg="Started container" PID=8619 containerID=c80cec7913049fea15af5e12da4316ea13ad4bda3da90fc6c45da088140807af description=kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-4qztg/dashboard-metrics-scraper id=11eede7e-753c-48fa-99a4-50ee3272278d name=/runtime.v1.RuntimeService/StartContainer sandboxID=d137e77747f260cac949e76097ef3725c6b037c6f6d53332afe644fee4e576ac
	Sep 29 11:32:26 functional-550377 crio[4240]: time="2025-09-29 11:32:26.185048120Z" level=info msg="Trying to access \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Sep 29 11:32:30 functional-550377 crio[4240]: time="2025-09-29 11:32:30.632900520Z" level=info msg="Pulled image: docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=07fe0c0b-bf84-425d-b67f-2fec26b0fbb0 name=/runtime.v1.ImageService/PullImage
	Sep 29 11:32:30 functional-550377 crio[4240]: time="2025-09-29 11:32:30.633482402Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=6dcae163-6a8d-404b-bed1-35e6254462ca name=/runtime.v1.ImageService/ImageStatus
	Sep 29 11:32:30 functional-550377 crio[4240]: time="2025-09-29 11:32:30.634239911Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,RepoTags:[],RepoDigests:[docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029],Size_:249229937,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=6dcae163-6a8d-404b-bed1-35e6254462ca name=/runtime.v1.ImageService/ImageStatus
	Sep 29 11:32:30 functional-550377 crio[4240]: time="2025-09-29 11:32:30.635001156Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=69dfe818-ad8d-4e2b-ab12-63c48a006a3a name=/runtime.v1.ImageService/ImageStatus
	Sep 29 11:32:30 functional-550377 crio[4240]: time="2025-09-29 11:32:30.636060938Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,RepoTags:[],RepoDigests:[docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029],Size_:249229937,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=69dfe818-ad8d-4e2b-ab12-63c48a006a3a name=/runtime.v1.ImageService/ImageStatus
	Sep 29 11:32:30 functional-550377 crio[4240]: time="2025-09-29 11:32:30.638821865Z" level=info msg="Creating container: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-2wd82/kubernetes-dashboard" id=aec4f6bb-4ee7-4f06-8d72-b24ecf1791b1 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 29 11:32:30 functional-550377 crio[4240]: time="2025-09-29 11:32:30.638912077Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 29 11:32:30 functional-550377 crio[4240]: time="2025-09-29 11:32:30.649013007Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/ed0e0ab9f12fddc98d229d929995bda65a88c470d5f159b37e084154a386b0b3/merged/etc/group: no such file or directory"
	Sep 29 11:32:30 functional-550377 crio[4240]: time="2025-09-29 11:32:30.704778405Z" level=info msg="Created container 92aaaf745c35b1a4faa5e3e29fa277e90f665cc91904d877e082866f18d3cb97: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-2wd82/kubernetes-dashboard" id=aec4f6bb-4ee7-4f06-8d72-b24ecf1791b1 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 29 11:32:30 functional-550377 crio[4240]: time="2025-09-29 11:32:30.705522390Z" level=info msg="Starting container: 92aaaf745c35b1a4faa5e3e29fa277e90f665cc91904d877e082866f18d3cb97" id=0e7d5abe-e68c-4d52-ba49-be52adaccc81 name=/runtime.v1.RuntimeService/StartContainer
	Sep 29 11:32:30 functional-550377 crio[4240]: time="2025-09-29 11:32:30.711351443Z" level=info msg="Started container" PID=9017 containerID=92aaaf745c35b1a4faa5e3e29fa277e90f665cc91904d877e082866f18d3cb97 description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-2wd82/kubernetes-dashboard id=0e7d5abe-e68c-4d52-ba49-be52adaccc81 name=/runtime.v1.RuntimeService/StartContainer sandboxID=085816c4e93df562b64cb13127cde4c8d369fea274574d8790d5d73d3a42307d
	Sep 29 11:32:41 functional-550377 crio[4240]: time="2025-09-29 11:32:41.380668913Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=67c1b8bd-289c-4707-b481-42615bce1daa name=/runtime.v1.ImageService/PullImage
	Sep 29 11:32:42 functional-550377 crio[4240]: time="2025-09-29 11:32:42.380242714Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=2a630955-0ae4-469e-bb51-3fa5945494c7 name=/runtime.v1.ImageService/PullImage
	Sep 29 11:33:31 functional-550377 crio[4240]: time="2025-09-29 11:33:31.380145500Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=7ab0283c-fc00-4831-ab52-68fbefb31c90 name=/runtime.v1.ImageService/PullImage
	Sep 29 11:33:32 functional-550377 crio[4240]: time="2025-09-29 11:33:32.379787450Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=a4dac46a-d150-4869-b7cc-040bc6d7e33d name=/runtime.v1.ImageService/PullImage
	Sep 29 11:35:04 functional-550377 crio[4240]: time="2025-09-29 11:35:04.380297669Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=58cf39b4-da5c-4cab-978f-46cfc1923aba name=/runtime.v1.ImageService/PullImage
	Sep 29 11:35:04 functional-550377 crio[4240]: time="2025-09-29 11:35:04.381095346Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=5c8d6d2d-d5ea-46cb-beee-653866cec409 name=/runtime.v1.ImageService/PullImage
	Sep 29 11:37:50 functional-550377 crio[4240]: time="2025-09-29 11:37:50.379947086Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=19c395c3-8a3d-48a8-9ab1-1eebcbff249c name=/runtime.v1.ImageService/PullImage
	Sep 29 11:37:53 functional-550377 crio[4240]: time="2025-09-29 11:37:53.380196617Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=08ffd226-e2cf-48ae-ad51-a828824afbe0 name=/runtime.v1.ImageService/PullImage
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	92aaaf745c35b       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93         9 minutes ago       Running             kubernetes-dashboard        0                   085816c4e93df       kubernetes-dashboard-855c9754f9-2wd82
	c80cec7913049       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   9 minutes ago       Running             dashboard-metrics-scraper   0                   d137e77747f26       dashboard-metrics-scraper-77bf4d6c4c-4qztg
	9581410676a6b       docker.io/library/nginx@sha256:27637a97e3d1d0518adc2a877b60db3779970f19474b6e586ddcbc2d5500e285                  9 minutes ago       Running             myfrontend                  0                   9ebb2136c65b6       sp-pod
	31e327e060094       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e              9 minutes ago       Exited              mount-munger                0                   5ce422848df2c       busybox-mount
	840f58ed44f2e       docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8                  10 minutes ago      Running             nginx                       0                   de4a50681bb9a       nginx-svc
	d7f087254bf92       docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb                  10 minutes ago      Running             mysql                       0                   86d203ce4a549       mysql-5bb876957f-pvxvg
	3d9ca6cd74212       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                                 10 minutes ago      Running             kube-apiserver              0                   dbf6ec9fd3309       kube-apiserver-functional-550377
	3ed9aec2e399a       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                                 10 minutes ago      Running             kube-controller-manager     1                   fdc2de435e6e1       kube-controller-manager-functional-550377
	3075633766f7c       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 10 minutes ago      Running             etcd                        1                   b201ba58fb72e       etcd-functional-550377
	1d6f439d1af5f       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                                 10 minutes ago      Running             kube-scheduler              1                   d30e1fd5248b5       kube-scheduler-functional-550377
	18ed13611ccf7       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                                 10 minutes ago      Running             kube-proxy                  1                   f7ba82427e036       kube-proxy-s6kfr
	cebfd5959524c       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 10 minutes ago      Running             coredns                     1                   e93a9ae2dbca0       coredns-66bc5c9577-c9944
	270fb0a095785       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 10 minutes ago      Running             storage-provisioner         1                   9a7eb03815da3       storage-provisioner
	f970d6bebbe37       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 10 minutes ago      Running             kindnet-cni                 1                   b36665d1962dc       kindnet-j8rp7
	36cf700def00e       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 11 minutes ago      Exited              coredns                     0                   e93a9ae2dbca0       coredns-66bc5c9577-c9944
	4c23f03a64169       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 11 minutes ago      Exited              storage-provisioner         0                   9a7eb03815da3       storage-provisioner
	a331a7664ee0b       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 12 minutes ago      Exited              kindnet-cni                 0                   b36665d1962dc       kindnet-j8rp7
	2849a96268fa2       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                                 12 minutes ago      Exited              kube-proxy                  0                   f7ba82427e036       kube-proxy-s6kfr
	3e25993266b96       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 12 minutes ago      Exited              etcd                        0                   b201ba58fb72e       etcd-functional-550377
	af161cf499f01       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                                 12 minutes ago      Exited              kube-scheduler              0                   d30e1fd5248b5       kube-scheduler-functional-550377
	58d68a080e75f       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                                 12 minutes ago      Exited              kube-controller-manager     0                   fdc2de435e6e1       kube-controller-manager-functional-550377
	
	
	==> coredns [36cf700def00e3131ab1a06cc525f3e60d7f5c506f141f196492955854ee5924] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:57373 - 28431 "HINFO IN 3303712710856113591.6122979640762642032. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.033449775s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [cebfd5959524c2b387f477119e6667656fa58d4affdc61c77f47f0bf470efbc5] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45568 - 55452 "HINFO IN 3379115138682965298.1413693299134697916. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.050350285s
	
	
	==> describe nodes <==
	Name:               functional-550377
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-550377
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e087d081f23c6d1317bb12845422265d8d3490cf
	                    minikube.k8s.io/name=functional-550377
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T11_29_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 11:29:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-550377
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 11:41:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 11:39:43 +0000   Mon, 29 Sep 2025 11:29:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 11:39:43 +0000   Mon, 29 Sep 2025 11:29:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 11:39:43 +0000   Mon, 29 Sep 2025 11:29:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 11:39:43 +0000   Mon, 29 Sep 2025 11:30:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-550377
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863452Ki
	  pods:               110
	System Info:
	  Machine ID:                 c338740b3e8c43bc9f33458464c1e275
	  System UUID:                c4fa2d3c-785c-4903-b828-2b2082a1334b
	  Boot ID:                    c950b162-3ea4-4410-8c2e-1238f18b29b9
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-9ljvx                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m56s
	  default                     hello-node-connect-7d85dfc575-nzqfl           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-pvxvg                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     10m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m46s
	  kube-system                 coredns-66bc5c9577-c9944                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     12m
	  kube-system                 etcd-functional-550377                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-j8rp7                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-550377              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-550377     200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-s6kfr                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-550377              100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-4qztg    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m39s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-2wd82         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node functional-550377 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node functional-550377 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x8 over 12m)  kubelet          Node functional-550377 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     12m                kubelet          Node functional-550377 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node functional-550377 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node functional-550377 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           12m                node-controller  Node functional-550377 event: Registered Node functional-550377 in Controller
	  Normal  NodeReady                11m                kubelet          Node functional-550377 status is now: NodeReady
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-550377 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-550377 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-550377 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                node-controller  Node functional-550377 event: Registered Node functional-550377 in Controller
	
	
	==> dmesg <==
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff b2 ca ec 61 42 22 08 06
	[  +4.588441] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ea 40 d2 b8 e9 db 08 06
	[Sep29 11:26] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 1b c4 37 9d 74 b6 26 5a 9a 38 ae 08 00
	[  +1.000205] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 92 1b c4 37 9d 74 b6 26 5a 9a 38 ae 08 00
	[  +1.024911] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: 92 1b c4 37 9d 74 b6 26 5a 9a 38 ae 08 00
	[  +1.022908] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 92 1b c4 37 9d 74 b6 26 5a 9a 38 ae 08 00
	[  +1.023945] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 92 1b c4 37 9d 74 b6 26 5a 9a 38 ae 08 00
	[  +1.023904] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 92 1b c4 37 9d 74 b6 26 5a 9a 38 ae 08 00
	[  +2.047860] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 92 1b c4 37 9d 74 b6 26 5a 9a 38 ae 08 00
	[  +4.032732] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 92 1b c4 37 9d 74 b6 26 5a 9a 38 ae 08 00
	[  +8.190439] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 92 1b c4 37 9d 74 b6 26 5a 9a 38 ae 08 00
	[ +16.382949] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 92 1b c4 37 9d 74 b6 26 5a 9a 38 ae 08 00
	[Sep29 11:27] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: 92 1b c4 37 9d 74 b6 26 5a 9a 38 ae 08 00
	
	
	==> etcd [3075633766f7cfdf7e7e522458b175769236631ae1c91dc0c721b972a7701b1c] <==
	{"level":"warn","ts":"2025-09-29T11:31:23.217230Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:31:23.223236Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:31:23.229475Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:31:23.235703Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:31:23.241896Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:31:23.249537Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:31:23.258164Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:31:23.264993Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:31:23.270894Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:31:23.276849Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:31:23.282861Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:31:23.289157Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:31:23.295596Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:31:23.301581Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:31:23.307618Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:31:23.313732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:31:23.319911Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:31:23.325758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:31:23.341203Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:31:23.347226Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:31:23.353262Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:31:23.401880Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56056","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T11:41:22.932991Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1162}
	{"level":"info","ts":"2025-09-29T11:41:22.951617Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1162,"took":"18.249231ms","hash":1726500919,"current-db-size-bytes":3436544,"current-db-size":"3.4 MB","current-db-size-in-use-bytes":1560576,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2025-09-29T11:41:22.951663Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1726500919,"revision":1162,"compact-revision":-1}
	
	
	==> etcd [3e25993266b96daac264f784ec1c6b241a3b5d064ff38fcfb2d3b1cd44fe48d8] <==
	{"level":"warn","ts":"2025-09-29T11:29:44.987131Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:29:44.993592Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:29:44.999637Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:29:45.012990Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:29:45.019054Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:29:45.024696Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:29:45.072851Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37264","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T11:31:02.060406Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-29T11:31:02.060522Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-550377","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-09-29T11:31:02.060632Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T11:31:09.061645Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T11:31:09.061767Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T11:31:09.061817Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"warn","ts":"2025-09-29T11:31:09.061846Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-09-29T11:31:09.061909Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-09-29T11:31:09.061918Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-09-29T11:31:09.061928Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"error","ts":"2025-09-29T11:31:09.061936Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-09-29T11:31:09.061844Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T11:31:09.063078Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-29T11:31:09.063104Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T11:31:09.064350Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-09-29T11:31:09.064404Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T11:31:09.064425Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-09-29T11:31:09.064430Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-550377","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 11:42:00 up  4:24,  0 users,  load average: 0.00, 0.17, 4.96
	Linux functional-550377 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [a331a7664ee0bcae0ed367686274bd0f01abdc04bdb4be4dfb6bce7fe7cc0b45] <==
	I0929 11:29:54.111608       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0929 11:29:54.111875       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0929 11:29:54.112012       1 main.go:148] setting mtu 1500 for CNI 
	I0929 11:29:54.112026       1 main.go:178] kindnetd IP family: "ipv4"
	I0929 11:29:54.112045       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-29T11:29:54Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0929 11:29:54.310667       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0929 11:29:54.310741       1 controller.go:381] "Waiting for informer caches to sync"
	I0929 11:29:54.310758       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0929 11:29:54.311213       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E0929 11:30:24.311171       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E0929 11:30:24.312220       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E0929 11:30:24.312228       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E0929 11:30:24.312304       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I0929 11:30:25.911919       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0929 11:30:25.911947       1 metrics.go:72] Registering metrics
	I0929 11:30:25.912277       1 controller.go:711] "Syncing nftables rules"
	I0929 11:30:34.317083       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:30:34.317137       1 main.go:301] handling current node
	I0929 11:30:44.317078       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:30:44.317123       1 main.go:301] handling current node
	I0929 11:30:54.314243       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:30:54.314288       1 main.go:301] handling current node
	
	
	==> kindnet [f970d6bebbe374c1f8d3f82706860df22512d2a41379924be5813691eaef50a5] <==
	I0929 11:39:52.375113       1 main.go:301] handling current node
	I0929 11:40:02.377743       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:40:02.377779       1 main.go:301] handling current node
	I0929 11:40:12.373040       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:40:12.373096       1 main.go:301] handling current node
	I0929 11:40:22.377177       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:40:22.377218       1 main.go:301] handling current node
	I0929 11:40:32.372191       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:40:32.372505       1 main.go:301] handling current node
	I0929 11:40:42.371598       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:40:42.371632       1 main.go:301] handling current node
	I0929 11:40:52.373530       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:40:52.373565       1 main.go:301] handling current node
	I0929 11:41:02.372936       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:41:02.372991       1 main.go:301] handling current node
	I0929 11:41:12.373087       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:41:12.373131       1 main.go:301] handling current node
	I0929 11:41:22.377381       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:41:22.377418       1 main.go:301] handling current node
	I0929 11:41:32.372246       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:41:32.372289       1 main.go:301] handling current node
	I0929 11:41:42.377046       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:41:42.377088       1 main.go:301] handling current node
	I0929 11:41:52.378226       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 11:41:52.378262       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3d9ca6cd7421207cc7b18311f80f335fefabcdcad54e62de2ad17ddf1e744827] <==
	E0929 11:32:02.391062       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:49990: use of closed network connection
	E0929 11:32:03.219883       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:50010: use of closed network connection
	E0929 11:32:04.261273       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:50038: use of closed network connection
	I0929 11:32:04.413374       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.109.60.212"}
	E0929 11:32:13.366341       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:37272: use of closed network connection
	I0929 11:32:21.004531       1 controller.go:667] quota admission added evaluator for: namespaces
	I0929 11:32:21.093679       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.165.157"}
	I0929 11:32:21.105098       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.15.248"}
	E0929 11:32:21.808255       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:36690: use of closed network connection
	I0929 11:32:39.663495       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:32:43.009271       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:34:03.374375       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:34:06.534054       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:35:23.338935       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:35:24.402334       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:36:31.439617       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:36:50.500405       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:37:38.765571       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:38:07.747840       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:38:45.859667       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:39:15.251585       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:39:49.414082       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:40:40.450235       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:41:14.008660       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:41:23.792421       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [3ed9aec2e399a9a35d81ae49eecc043bd559e7b0421fa0e258b09220dbab751b] <==
	I0929 11:31:27.191719       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I0929 11:31:27.194013       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0929 11:31:27.197259       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0929 11:31:27.197292       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0929 11:31:27.197377       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0929 11:31:27.197635       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0929 11:31:27.197791       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I0929 11:31:27.197792       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0929 11:31:27.197946       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0929 11:31:27.197956       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-550377"
	I0929 11:31:27.198051       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0929 11:31:27.200041       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0929 11:31:27.200047       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0929 11:31:27.202912       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 11:31:27.203997       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I0929 11:31:27.206235       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0929 11:31:27.208403       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0929 11:31:27.210623       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0929 11:31:27.219952       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0929 11:32:21.045688       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 11:32:21.050013       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 11:32:21.052354       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 11:32:21.054304       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 11:32:21.055892       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 11:32:21.060577       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [58d68a080e75f073057b5eac7eaa2cfce893d266b27066338455a23127da5ba2] <==
	I0929 11:29:52.465311       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0929 11:29:52.465381       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0929 11:29:52.465411       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0929 11:29:52.465567       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0929 11:29:52.465611       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0929 11:29:52.465726       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0929 11:29:52.465744       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0929 11:29:52.465788       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0929 11:29:52.465850       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0929 11:29:52.466096       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0929 11:29:52.467310       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0929 11:29:52.467903       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0929 11:29:52.467907       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0929 11:29:52.469517       1 shared_informer.go:356] "Caches are synced" controller="node"
	I0929 11:29:52.469571       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0929 11:29:52.469623       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0929 11:29:52.469630       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0929 11:29:52.469635       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0929 11:29:52.469792       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 11:29:52.473779       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0929 11:29:52.475044       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 11:29:52.476405       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="functional-550377" podCIDRs=["10.244.0.0/24"]
	I0929 11:29:52.479648       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I0929 11:29:52.489972       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 11:30:37.420714       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [18ed13611ccf7105da60fca3c5dbd01ecd1ee3db33b0dcef20b42adeea32b71e] <==
	I0929 11:31:03.173286       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 11:31:03.274023       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 11:31:03.274068       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0929 11:31:03.274195       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 11:31:03.294520       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 11:31:03.294571       1 server_linux.go:132] "Using iptables Proxier"
	I0929 11:31:03.300713       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 11:31:03.301237       1 server.go:527] "Version info" version="v1.34.0"
	I0929 11:31:03.301276       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 11:31:03.302746       1 config.go:200] "Starting service config controller"
	I0929 11:31:03.302785       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 11:31:03.302825       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 11:31:03.302845       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 11:31:03.302874       1 config.go:309] "Starting node config controller"
	I0929 11:31:03.302885       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 11:31:03.302891       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 11:31:03.302879       1 config.go:106] "Starting endpoint slice config controller"
	I0929 11:31:03.302900       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 11:31:03.402917       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 11:31:03.402996       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 11:31:03.403100       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	E0929 11:31:23.806243       1 reflector.go:205] "Failed to watch" err="endpointslices.discovery.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"endpointslices\" in API group \"discovery.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E0929 11:31:23.806682       1 reflector.go:205] "Failed to watch" err="nodes \"functional-550377\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 11:31:23.806808       1 reflector.go:205] "Failed to watch" err="servicecidrs.networking.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"servicecidrs\" in API group \"networking.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E0929 11:31:23.806779       1 reflector.go:205] "Failed to watch" err="services is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	
	
	==> kube-proxy [2849a96268fa2b2402abd7934d23cd86cb6fb4ea1cb9c330fd874e6090fb1fbb] <==
	I0929 11:29:54.002996       1 server_linux.go:53] "Using iptables proxy"
	I0929 11:29:54.065685       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 11:29:54.166727       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 11:29:54.166781       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0929 11:29:54.166860       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 11:29:54.184851       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 11:29:54.184907       1 server_linux.go:132] "Using iptables Proxier"
	I0929 11:29:54.189798       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 11:29:54.190492       1 server.go:527] "Version info" version="v1.34.0"
	I0929 11:29:54.190519       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 11:29:54.192464       1 config.go:200] "Starting service config controller"
	I0929 11:29:54.192485       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 11:29:54.192538       1 config.go:309] "Starting node config controller"
	I0929 11:29:54.192544       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 11:29:54.192551       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 11:29:54.192559       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 11:29:54.192558       1 config.go:106] "Starting endpoint slice config controller"
	I0929 11:29:54.192565       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 11:29:54.192570       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 11:29:54.295047       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0929 11:29:54.295180       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 11:29:54.295200       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [1d6f439d1af5f86be8179b870d20a9defb564e6288e7eddcdeb5657f3b831790] <==
	I0929 11:31:11.105070       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 11:31:11.105081       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0929 11:31:11.105089       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I0929 11:31:11.105515       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0929 11:31:11.105582       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0929 11:31:11.206005       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I0929 11:31:11.206046       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 11:31:11.206139       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	E0929 11:31:23.786222       1 reflector.go:205] "Failed to watch" err="volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0929 11:31:23.786426       1 reflector.go:205] "Failed to watch" err="csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0929 11:31:23.786479       1 reflector.go:205] "Failed to watch" err="nodes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 11:31:23.786541       1 reflector.go:205] "Failed to watch" err="storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0929 11:31:23.786578       1 reflector.go:205] "Failed to watch" err="persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0929 11:31:23.786624       1 reflector.go:205] "Failed to watch" err="csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0929 11:31:23.786709       1 reflector.go:205] "Failed to watch" err="poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot watch resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0929 11:31:23.786746       1 reflector.go:205] "Failed to watch" err="services is forbidden: User \"system:kube-scheduler\" cannot watch resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 11:31:23.786762       1 reflector.go:205] "Failed to watch" err="deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 11:31:23.786817       1 reflector.go:205] "Failed to watch" err="persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0929 11:31:23.786833       1 reflector.go:205] "Failed to watch" err="namespaces is forbidden: User \"system:kube-scheduler\" cannot watch resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0929 11:31:23.786849       1 reflector.go:205] "Failed to watch" err="pods is forbidden: User \"system:kube-scheduler\" cannot watch resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 11:31:23.786868       1 reflector.go:205] "Failed to watch" err="replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0929 11:31:23.786891       1 reflector.go:205] "Failed to watch" err="resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0929 11:31:23.786915       1 reflector.go:205] "Failed to watch" err="csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0929 11:31:23.787051       1 reflector.go:205] "Failed to watch" err="resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0929 11:31:23.788778       1 reflector.go:205] "Failed to watch" err="replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	
	
	==> kube-scheduler [af161cf499f015b0f5d16c21dded5cf0b8860db5e69f2fd66f43144ff93cbd2d] <==
	E0929 11:29:46.239092       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0929 11:29:46.239806       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0929 11:29:46.239845       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0929 11:29:46.240477       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0929 11:29:46.240660       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 11:29:46.240514       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0929 11:29:46.240467       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 11:29:46.241081       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0929 11:29:46.240527       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0929 11:29:46.241324       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0929 11:29:46.241383       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0929 11:29:46.241309       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 11:29:46.240704       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0929 11:29:46.240692       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0929 11:29:46.240698       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 11:29:46.241656       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0929 11:29:46.241714       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 11:29:46.241758       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	I0929 11:29:47.737872       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 11:31:09.127094       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0929 11:31:09.127114       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 11:31:09.127203       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0929 11:31:09.127226       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0929 11:31:09.127258       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0929 11:31:09.127286       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 29 11:40:47 functional-550377 kubelet[5411]: E0929 11:40:47.380137    5411 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-9ljvx" podUID="1dc0f596-cc9f-45be-87c1-99716f947d55"
	Sep 29 11:40:51 functional-550377 kubelet[5411]: E0929 11:40:51.485547    5411 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759146051485355568  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303432}  inodes_used:{value:134}}"
	Sep 29 11:40:51 functional-550377 kubelet[5411]: E0929 11:40:51.485579    5411 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759146051485355568  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303432}  inodes_used:{value:134}}"
	Sep 29 11:41:01 functional-550377 kubelet[5411]: E0929 11:41:01.379896    5411 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-nzqfl" podUID="440cc81d-e888-42b6-9f9d-29e01eb75600"
	Sep 29 11:41:01 functional-550377 kubelet[5411]: E0929 11:41:01.487870    5411 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759146061487637923  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303432}  inodes_used:{value:134}}"
	Sep 29 11:41:01 functional-550377 kubelet[5411]: E0929 11:41:01.487902    5411 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759146061487637923  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303432}  inodes_used:{value:134}}"
	Sep 29 11:41:02 functional-550377 kubelet[5411]: E0929 11:41:02.379782    5411 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-9ljvx" podUID="1dc0f596-cc9f-45be-87c1-99716f947d55"
	Sep 29 11:41:11 functional-550377 kubelet[5411]: E0929 11:41:11.489739    5411 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759146071489556554  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303432}  inodes_used:{value:134}}"
	Sep 29 11:41:11 functional-550377 kubelet[5411]: E0929 11:41:11.489769    5411 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759146071489556554  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303432}  inodes_used:{value:134}}"
	Sep 29 11:41:15 functional-550377 kubelet[5411]: E0929 11:41:15.378958    5411 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-nzqfl" podUID="440cc81d-e888-42b6-9f9d-29e01eb75600"
	Sep 29 11:41:15 functional-550377 kubelet[5411]: E0929 11:41:15.378985    5411 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-9ljvx" podUID="1dc0f596-cc9f-45be-87c1-99716f947d55"
	Sep 29 11:41:21 functional-550377 kubelet[5411]: E0929 11:41:21.491402    5411 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759146081491199036  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303432}  inodes_used:{value:134}}"
	Sep 29 11:41:21 functional-550377 kubelet[5411]: E0929 11:41:21.491432    5411 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759146081491199036  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303432}  inodes_used:{value:134}}"
	Sep 29 11:41:26 functional-550377 kubelet[5411]: E0929 11:41:26.379883    5411 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-9ljvx" podUID="1dc0f596-cc9f-45be-87c1-99716f947d55"
	Sep 29 11:41:30 functional-550377 kubelet[5411]: E0929 11:41:30.379741    5411 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-nzqfl" podUID="440cc81d-e888-42b6-9f9d-29e01eb75600"
	Sep 29 11:41:31 functional-550377 kubelet[5411]: E0929 11:41:31.493270    5411 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759146091493085148  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303432}  inodes_used:{value:134}}"
	Sep 29 11:41:31 functional-550377 kubelet[5411]: E0929 11:41:31.493299    5411 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759146091493085148  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303432}  inodes_used:{value:134}}"
	Sep 29 11:41:39 functional-550377 kubelet[5411]: E0929 11:41:39.380025    5411 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-9ljvx" podUID="1dc0f596-cc9f-45be-87c1-99716f947d55"
	Sep 29 11:41:41 functional-550377 kubelet[5411]: E0929 11:41:41.494760    5411 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759146101494566902  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303432}  inodes_used:{value:134}}"
	Sep 29 11:41:41 functional-550377 kubelet[5411]: E0929 11:41:41.494791    5411 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759146101494566902  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303432}  inodes_used:{value:134}}"
	Sep 29 11:41:44 functional-550377 kubelet[5411]: E0929 11:41:44.379416    5411 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-nzqfl" podUID="440cc81d-e888-42b6-9f9d-29e01eb75600"
	Sep 29 11:41:51 functional-550377 kubelet[5411]: E0929 11:41:51.496386    5411 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759146111496167028  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303432}  inodes_used:{value:134}}"
	Sep 29 11:41:51 functional-550377 kubelet[5411]: E0929 11:41:51.496427    5411 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759146111496167028  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303432}  inodes_used:{value:134}}"
	Sep 29 11:41:52 functional-550377 kubelet[5411]: E0929 11:41:52.379847    5411 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-9ljvx" podUID="1dc0f596-cc9f-45be-87c1-99716f947d55"
	Sep 29 11:41:59 functional-550377 kubelet[5411]: E0929 11:41:59.379111    5411 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-nzqfl" podUID="440cc81d-e888-42b6-9f9d-29e01eb75600"
	
	
	==> kubernetes-dashboard [92aaaf745c35b1a4faa5e3e29fa277e90f665cc91904d877e082866f18d3cb97] <==
	2025/09/29 11:32:30 Starting overwatch
	2025/09/29 11:32:30 Using namespace: kubernetes-dashboard
	2025/09/29 11:32:30 Using in-cluster config to connect to apiserver
	2025/09/29 11:32:30 Using secret token for csrf signing
	2025/09/29 11:32:30 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/09/29 11:32:30 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/09/29 11:32:30 Successful initial request to the apiserver, version: v1.34.0
	2025/09/29 11:32:30 Generating JWE encryption key
	2025/09/29 11:32:30 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/09/29 11:32:30 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/09/29 11:32:30 Initializing JWE encryption key from synchronized object
	2025/09/29 11:32:30 Creating in-cluster Sidecar client
	2025/09/29 11:32:30 Successful request to sidecar
	2025/09/29 11:32:30 Serving insecurely on HTTP port: 9090
	
	
	==> storage-provisioner [270fb0a0957858d76fdddbd720c9977bf4ff37f6ef0a391f4026c29abb4ed6ee] <==
	W0929 11:41:34.803820       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:41:36.806806       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:41:36.810845       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:41:38.813904       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:41:38.817825       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:41:40.820908       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:41:40.825447       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:41:42.828179       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:41:42.832021       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:41:44.835008       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:41:44.838703       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:41:46.842284       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:41:46.846752       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:41:48.849734       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:41:48.854659       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:41:50.857462       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:41:50.861203       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:41:52.864241       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:41:52.868071       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:41:54.870800       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:41:54.875394       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:41:56.878878       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:41:56.882885       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:41:58.886020       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:41:58.890516       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [4c23f03a641699c169fee6e8dd0091f180d48cb03f0625c713ec1507b9da4cc9] <==
	W0929 11:30:37.101715       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:30:39.104551       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:30:39.108335       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:30:41.111771       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:30:41.116449       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:30:43.119840       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:30:43.123441       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:30:45.127288       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:30:45.131064       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:30:47.133862       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:30:47.137764       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:30:49.140479       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:30:49.143795       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:30:51.147801       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:30:51.152431       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:30:53.156126       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:30:53.161456       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:30:55.165363       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:30:55.169409       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:30:57.172452       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:30:57.178122       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:30:59.181384       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:30:59.185090       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:31:01.188412       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:31:01.193363       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-550377 -n functional-550377
helpers_test.go:269: (dbg) Run:  kubectl --context functional-550377 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-9ljvx hello-node-connect-7d85dfc575-nzqfl
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-550377 describe pod busybox-mount hello-node-75c85bcc94-9ljvx hello-node-connect-7d85dfc575-nzqfl
helpers_test.go:290: (dbg) kubectl --context functional-550377 describe pod busybox-mount hello-node-75c85bcc94-9ljvx hello-node-connect-7d85dfc575-nzqfl:

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-550377/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 11:32:09 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  mount-munger:
	    Container ID:  cri-o://31e327e06009459221f93788d0a2a115a3766c20b7638a20a7f8faa268531a66
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 29 Sep 2025 11:32:13 +0000
	      Finished:     Mon, 29 Sep 2025 11:32:13 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8t6gm (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-8t6gm:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  9m51s  default-scheduler  Successfully assigned default/busybox-mount to functional-550377
	  Normal  Pulling    9m52s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     9m48s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 3.096s (3.096s including waiting). Image size: 4631262 bytes.
	  Normal  Created    9m48s  kubelet            Created container: mount-munger
	  Normal  Started    9m48s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-9ljvx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-550377/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 11:32:04 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gmwtr (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-gmwtr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m57s                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-9ljvx to functional-550377
	  Normal   Pulling    6m57s (x5 over 9m56s)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m57s (x5 over 9m56s)   kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     6m57s (x5 over 9m56s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m48s (x21 over 9m55s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m48s (x21 over 9m55s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-nzqfl
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-550377/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 11:31:58 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:           10.244.0.7
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lwq5f (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-lwq5f:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-nzqfl to functional-550377
	  Normal   Pulling    6m57s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m57s (x5 over 9m56s)   kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     6m57s (x5 over 9m56s)   kubelet            Error: ErrImagePull
	  Warning  Failed     4m46s (x21 over 9m55s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    2s (x42 over 9m55s)     kubelet            Back-off pulling image "kicbase/echo-server"

-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.12s)
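Note: the kubelet events above identify the root cause. The pod references the short image name "kicbase/echo-server", and CRI-O refuses to expand short names because /etc/containers/registries.conf on the node defines no unqualified-search registries. A minimal sketch of the node-side fix, assuming the image lives on docker.io (the registry choice is an assumption, not a value taken from this report):

	# On the minikube node (e.g. via `minikube -p functional-550377 ssh`):
	# /etc/containers/registries.conf -- allow short names to resolve against docker.io
	unqualified-search-registries = ["docker.io"]

	# Restart CRI-O so the change takes effect
	sudo systemctl restart crio

With a search registry configured, "kicbase/echo-server" resolves to docker.io/kicbase/echo-server and the pull can proceed.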

TestFunctional/parallel/ServiceCmd/DeployApp (600.58s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-550377 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-550377 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-9ljvx" [1dc0f596-cc9f-45be-87c1-99716f947d55] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-550377 -n functional-550377
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-09-29 11:42:04.719592759 +0000 UTC m=+1194.236155691
functional_test.go:1460: (dbg) Run:  kubectl --context functional-550377 describe po hello-node-75c85bcc94-9ljvx -n default
functional_test.go:1460: (dbg) kubectl --context functional-550377 describe po hello-node-75c85bcc94-9ljvx -n default:
Name:             hello-node-75c85bcc94-9ljvx
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-550377/192.168.49.2
Start Time:       Mon, 29 Sep 2025 11:32:04 +0000
Labels:           app=hello-node
                  pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
  IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gmwtr (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-gmwtr:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-75c85bcc94-9ljvx to functional-550377
  Normal   Pulling    7m (x5 over 9m59s)      kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m (x5 over 9m59s)      kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
  Warning  Failed     7m (x5 over 9m59s)      kubelet            Error: ErrImagePull
  Normal   BackOff    4m51s (x21 over 9m58s)  kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     4m51s (x21 over 9m58s)  kubelet            Error: ImagePullBackOff
functional_test.go:1460: (dbg) Run:  kubectl --context functional-550377 logs hello-node-75c85bcc94-9ljvx -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-550377 logs hello-node-75c85bcc94-9ljvx -n default: exit status 1 (62.726561ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-9ljvx" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1460: kubectl --context functional-550377 logs hello-node-75c85bcc94-9ljvx -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.58s)
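Note: this is the same root cause as ServiceCmdConnect above — the deployment was created from the short name "kicbase/echo-server", which CRI-O cannot resolve on this node. A client-side workaround is to use a fully qualified image reference so no short-name lookup is needed; a sketch (the docker.io registry and the 1.0 tag are assumptions, not values from this run):

	kubectl --context functional-550377 create deployment hello-node \
	    --image=docker.io/kicbase/echo-server:1.0
	kubectl --context functional-550377 expose deployment hello-node \
	    --type=NodePort --port=8080

A fully qualified reference bypasses registries.conf short-name resolution entirely.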

TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-550377 service --namespace=default --https --url hello-node: exit status 115 (527.749755ms)

-- stdout --
	https://192.168.49.2:31217
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-550377 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)

TestFunctional/parallel/ServiceCmd/Format (0.53s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-550377 service hello-node --url --format={{.IP}}: exit status 115 (528.211025ms)

-- stdout --
	192.168.49.2
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-550377 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.53s)

TestFunctional/parallel/ServiceCmd/URL (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-550377 service hello-node --url: exit status 115 (523.914468ms)

-- stdout --
	http://192.168.49.2:31217
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-550377 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31217
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.52s)
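Note: the three sub-second ServiceCmd failures (HTTPS, Format, URL) are downstream symptoms of the same ImagePullBackOff rather than independent bugs — minikube resolves the NodePort URL correctly (it even prints it on stdout) but exits with SVC_UNREACHABLE because the hello-node service has no ready pod behind it. A quick diagnostic sketch to confirm the empty endpoints before suspecting the service machinery:

	# An empty ENDPOINTS column confirms there is no ready backing pod
	kubectl --context functional-550377 get endpoints hello-node
	kubectl --context functional-550377 get pods -l app=hello-node

Once the image pull succeeds and a pod reports Ready, the same `minikube service hello-node --url` invocation should return the URL with exit status 0.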


Test pass (299/332)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 14.19
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.06
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.34.0/json-events 13.61
13 TestDownloadOnly/v1.34.0/preload-exists 0
17 TestDownloadOnly/v1.34.0/LogsDuration 0.07
18 TestDownloadOnly/v1.34.0/DeleteAll 0.23
19 TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds 0.14
20 TestDownloadOnlyKic 1.18
21 TestBinaryMirror 1.19
22 TestOffline 82.89
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.16
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.16
27 TestAddons/Setup 162.38
31 TestAddons/serial/GCPAuth/Namespaces 0.13
32 TestAddons/serial/GCPAuth/FakeCredentials 11.47
35 TestAddons/parallel/Registry 16.62
36 TestAddons/parallel/RegistryCreds 0.62
38 TestAddons/parallel/InspektorGadget 6.27
39 TestAddons/parallel/MetricsServer 5.66
41 TestAddons/parallel/CSI 57.63
42 TestAddons/parallel/Headlamp 17.56
43 TestAddons/parallel/CloudSpanner 5.49
44 TestAddons/parallel/LocalPath 14.14
45 TestAddons/parallel/NvidiaDevicePlugin 6.48
46 TestAddons/parallel/Yakd 10.9
47 TestAddons/parallel/AmdGpuDevicePlugin 6.52
48 TestAddons/StoppedEnableDisable 16.47
49 TestCertOptions 28.3
50 TestCertExpiration 216.72
52 TestForceSystemdFlag 25.44
53 TestForceSystemdEnv 36.9
55 TestKVMDriverInstallOrUpdate 0.88
59 TestErrorSpam/setup 19.22
60 TestErrorSpam/start 0.61
61 TestErrorSpam/status 0.91
62 TestErrorSpam/pause 1.44
63 TestErrorSpam/unpause 1.48
64 TestErrorSpam/stop 2.49
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 68.16
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 6.35
71 TestFunctional/serial/KubeContext 0.05
72 TestFunctional/serial/KubectlGetPods 0.07
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.1
76 TestFunctional/serial/CacheCmd/cache/add_local 2.14
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.7
81 TestFunctional/serial/CacheCmd/cache/delete 0.1
82 TestFunctional/serial/MinikubeKubectlCmd 0.11
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
84 TestFunctional/serial/ExtraConfig 46.16
85 TestFunctional/serial/ComponentHealth 0.07
86 TestFunctional/serial/LogsCmd 1.42
87 TestFunctional/serial/LogsFileCmd 1.42
88 TestFunctional/serial/InvalidService 4.24
90 TestFunctional/parallel/ConfigCmd 0.35
91 TestFunctional/parallel/DashboardCmd 14.11
92 TestFunctional/parallel/DryRun 0.35
93 TestFunctional/parallel/InternationalLanguage 0.15
94 TestFunctional/parallel/StatusCmd 0.91
99 TestFunctional/parallel/AddonsCmd 0.13
100 TestFunctional/parallel/PersistentVolumeClaim 33.96
102 TestFunctional/parallel/SSHCmd 0.58
103 TestFunctional/parallel/CpCmd 1.73
104 TestFunctional/parallel/MySQL 18.18
105 TestFunctional/parallel/FileSync 0.29
106 TestFunctional/parallel/CertSync 1.71
110 TestFunctional/parallel/NodeLabels 0.07
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.6
114 TestFunctional/parallel/License 0.37
115 TestFunctional/parallel/ImageCommands/ImageListShort 0.21
116 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
117 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
118 TestFunctional/parallel/ImageCommands/ImageListYaml 0.21
119 TestFunctional/parallel/ImageCommands/ImageBuild 3.67
120 TestFunctional/parallel/ImageCommands/Setup 1.99
121 TestFunctional/parallel/UpdateContextCmd/no_changes 0.13
122 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.13
123 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.13
124 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.33
126 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.41
127 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
129 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 18.21
130 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.89
131 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.98
132 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.5
133 TestFunctional/parallel/ImageCommands/ImageRemove 0.64
134 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.78
135 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.55
137 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
138 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
142 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
143 TestFunctional/parallel/ProfileCmd/profile_not_create 0.39
144 TestFunctional/parallel/ProfileCmd/profile_list 0.37
145 TestFunctional/parallel/ProfileCmd/profile_json_output 0.37
146 TestFunctional/parallel/MountCmd/any-port 7.78
147 TestFunctional/parallel/MountCmd/specific-port 1.6
148 TestFunctional/parallel/MountCmd/VerifyCleanup 1.61
149 TestFunctional/parallel/Version/short 0.05
150 TestFunctional/parallel/Version/components 0.49
151 TestFunctional/parallel/ServiceCmd/List 1.69
152 TestFunctional/parallel/ServiceCmd/JSONOutput 1.69
156 TestFunctional/delete_echo-server_images 0.04
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
163 TestMultiControlPlane/serial/StartCluster 176.96
164 TestMultiControlPlane/serial/DeployApp 6.78
165 TestMultiControlPlane/serial/PingHostFromPods 1.15
166 TestMultiControlPlane/serial/AddWorkerNode 55.28
167 TestMultiControlPlane/serial/NodeLabels 0.07
168 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.86
169 TestMultiControlPlane/serial/CopyFile 16.51
170 TestMultiControlPlane/serial/StopSecondaryNode 14.29
171 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.71
172 TestMultiControlPlane/serial/RestartSecondaryNode 9.06
173 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.88
174 TestMultiControlPlane/serial/RestartClusterKeepsNodes 108.51
175 TestMultiControlPlane/serial/DeleteSecondaryNode 11.42
176 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.71
177 TestMultiControlPlane/serial/StopCluster 48.82
178 TestMultiControlPlane/serial/RestartCluster 57.97
179 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.67
180 TestMultiControlPlane/serial/AddSecondaryNode 35.64
181 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.87
185 TestJSONOutput/start/Command 68.36
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Command 0.63
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Command 0.62
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 6.09
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.2
210 TestKicCustomNetwork/create_custom_network 36.65
211 TestKicCustomNetwork/use_default_bridge_network 25.44
212 TestKicExistingNetwork 24.16
213 TestKicCustomSubnet 24.63
214 TestKicStaticIP 25.95
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 48.25
219 TestMountStart/serial/StartWithMountFirst 6.71
220 TestMountStart/serial/VerifyMountFirst 0.28
221 TestMountStart/serial/StartWithMountSecond 6.41
222 TestMountStart/serial/VerifyMountSecond 0.27
223 TestMountStart/serial/DeleteFirst 1.67
224 TestMountStart/serial/VerifyMountPostDelete 0.27
225 TestMountStart/serial/Stop 1.19
226 TestMountStart/serial/RestartStopped 7.73
227 TestMountStart/serial/VerifyMountPostStop 0.26
230 TestMultiNode/serial/FreshStart2Nodes 92.4
231 TestMultiNode/serial/DeployApp2Nodes 5.69
232 TestMultiNode/serial/PingHostFrom2Pods 0.8
233 TestMultiNode/serial/AddNode 24.63
234 TestMultiNode/serial/MultiNodeLabels 0.07
235 TestMultiNode/serial/ProfileList 0.67
236 TestMultiNode/serial/CopyFile 9.8
237 TestMultiNode/serial/StopNode 2.19
238 TestMultiNode/serial/StartAfterStop 7.5
239 TestMultiNode/serial/RestartKeepsNodes 80.59
240 TestMultiNode/serial/DeleteNode 5.38
241 TestMultiNode/serial/StopMultiNode 30.62
242 TestMultiNode/serial/RestartMultiNode 50.18
243 TestMultiNode/serial/ValidateNameConflict 26.14
248 TestPreload 119.62
250 TestScheduledStopUnix 98.88
253 TestInsufficientStorage 10.1
254 TestRunningBinaryUpgrade 51.19
256 TestKubernetesUpgrade 295.72
257 TestMissingContainerUpgrade 97.14
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
260 TestNoKubernetes/serial/StartWithK8s 38.63
261 TestNoKubernetes/serial/StartWithStopK8s 25.09
262 TestNoKubernetes/serial/Start 7.45
263 TestNoKubernetes/serial/VerifyK8sNotRunning 0.27
264 TestNoKubernetes/serial/ProfileList 1.53
265 TestNoKubernetes/serial/Stop 1.21
266 TestNoKubernetes/serial/StartNoArgs 6.79
267 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.26
268 TestStoppedBinaryUpgrade/Setup 2.62
269 TestStoppedBinaryUpgrade/Upgrade 40.05
278 TestPause/serial/Start 44.77
279 TestStoppedBinaryUpgrade/MinikubeLogs 1.03
287 TestNetworkPlugins/group/false 3.37
291 TestPause/serial/SecondStartNoReconfiguration 8.6
292 TestPause/serial/Pause 0.65
293 TestPause/serial/VerifyStatus 0.31
294 TestPause/serial/Unpause 0.63
295 TestPause/serial/PauseAgain 0.67
296 TestPause/serial/DeletePaused 2.81
297 TestPause/serial/VerifyDeletedResources 4.43
299 TestStartStop/group/old-k8s-version/serial/FirstStart 50.83
301 TestStartStop/group/no-preload/serial/FirstStart 57.48
302 TestStartStop/group/old-k8s-version/serial/DeployApp 11.33
303 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.87
304 TestStartStop/group/old-k8s-version/serial/Stop 16.17
305 TestStartStop/group/no-preload/serial/DeployApp 9.26
306 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.17
307 TestStartStop/group/old-k8s-version/serial/SecondStart 45.97
308 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.94
309 TestStartStop/group/no-preload/serial/Stop 18.44
310 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
311 TestStartStop/group/no-preload/serial/SecondStart 44.69
312 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
313 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
314 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.23
315 TestStartStop/group/old-k8s-version/serial/Pause 2.69
317 TestStartStop/group/embed-certs/serial/FirstStart 72.45
318 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
319 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
320 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
321 TestStartStop/group/no-preload/serial/Pause 2.88
323 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 70.79
325 TestStartStop/group/newest-cni/serial/FirstStart 29.14
326 TestNetworkPlugins/group/auto/Start 41.08
327 TestStartStop/group/newest-cni/serial/DeployApp 0
328 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.88
329 TestStartStop/group/newest-cni/serial/Stop 12.47
330 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.17
331 TestStartStop/group/newest-cni/serial/SecondStart 11.37
332 TestStartStop/group/embed-certs/serial/DeployApp 9.3
333 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.99
334 TestNetworkPlugins/group/auto/KubeletFlags 0.35
335 TestStartStop/group/embed-certs/serial/Stop 18.18
336 TestNetworkPlugins/group/auto/NetCatPod 8.24
337 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
338 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
339 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
340 TestStartStop/group/newest-cni/serial/Pause 2.83
341 TestNetworkPlugins/group/kindnet/Start 71.54
342 TestNetworkPlugins/group/auto/DNS 0.17
343 TestNetworkPlugins/group/auto/Localhost 0.12
344 TestNetworkPlugins/group/auto/HairPin 0.12
345 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.45
346 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
347 TestStartStop/group/embed-certs/serial/SecondStart 44.56
348 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.09
349 TestStartStop/group/default-k8s-diff-port/serial/Stop 16.58
350 TestNetworkPlugins/group/calico/Start 54.97
351 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.18
352 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 46.24
353 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
354 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
355 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.22
356 TestStartStop/group/embed-certs/serial/Pause 2.92
357 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
358 TestNetworkPlugins/group/custom-flannel/Start 56.42
359 TestNetworkPlugins/group/calico/ControllerPod 6.01
360 TestNetworkPlugins/group/kindnet/KubeletFlags 0.32
361 TestNetworkPlugins/group/kindnet/NetCatPod 10.22
362 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
363 TestNetworkPlugins/group/calico/KubeletFlags 0.28
364 TestNetworkPlugins/group/calico/NetCatPod 8.22
365 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
366 TestNetworkPlugins/group/kindnet/DNS 0.14
367 TestNetworkPlugins/group/kindnet/Localhost 0.12
368 TestNetworkPlugins/group/kindnet/HairPin 0.13
369 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
370 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.98
371 TestNetworkPlugins/group/calico/DNS 0.19
372 TestNetworkPlugins/group/calico/Localhost 0.15
373 TestNetworkPlugins/group/calico/HairPin 0.15
374 TestNetworkPlugins/group/enable-default-cni/Start 69.88
375 TestNetworkPlugins/group/flannel/Start 59.79
376 TestNetworkPlugins/group/bridge/Start 62.61
377 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.31
378 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.22
379 TestNetworkPlugins/group/custom-flannel/DNS 0.15
380 TestNetworkPlugins/group/custom-flannel/Localhost 0.12
381 TestNetworkPlugins/group/custom-flannel/HairPin 0.12
382 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.27
383 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.2
384 TestNetworkPlugins/group/flannel/ControllerPod 6.01
385 TestNetworkPlugins/group/flannel/KubeletFlags 0.27
386 TestNetworkPlugins/group/flannel/NetCatPod 8.18
387 TestNetworkPlugins/group/enable-default-cni/DNS 0.14
388 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
389 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
390 TestNetworkPlugins/group/bridge/KubeletFlags 0.26
391 TestNetworkPlugins/group/bridge/NetCatPod 9.18
392 TestNetworkPlugins/group/flannel/DNS 0.14
393 TestNetworkPlugins/group/flannel/Localhost 0.12
394 TestNetworkPlugins/group/flannel/HairPin 0.13
395 TestNetworkPlugins/group/bridge/DNS 0.14
396 TestNetworkPlugins/group/bridge/Localhost 0.12
397 TestNetworkPlugins/group/bridge/HairPin 0.12
TestDownloadOnly/v1.28.0/json-events (14.19s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-989561 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-989561 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (14.192948143s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (14.19s)
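
For reference, the flow exercised above is plain CLI usage and can be reproduced by hand. A minimal sketch, assuming a working docker daemon; the profile name demo-download is illustrative:

  # Cache the kicbase image and the v1.28.0 preload without creating a node;
  # --download-only exits once the artifacts are on disk.
  minikube start -p demo-download --download-only \
    --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker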

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I0929 11:22:24.711457  747468 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
I0929 11:22:24.711586  747468 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21655-743952/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
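
The assertion above only checks the cache on disk. The equivalent hand check, assuming the default MINIKUBE_HOME (the v18 path prefix is taken from the log above):

  # The preload tarball lands in the profile-independent cache:
  ls -lh "$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4"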

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-989561
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-989561: exit status 85 (60.257406ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-989561 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-989561 │ jenkins │ v1.37.0 │ 29 Sep 25 11:22 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 11:22:10
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 11:22:10.560364  747480 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:22:10.560611  747480 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:22:10.560619  747480 out.go:374] Setting ErrFile to fd 2...
	I0929 11:22:10.560623  747480 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:22:10.560845  747480 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21655-743952/.minikube/bin
	W0929 11:22:10.560993  747480 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21655-743952/.minikube/config/config.json: open /home/jenkins/minikube-integration/21655-743952/.minikube/config/config.json: no such file or directory
	I0929 11:22:10.561451  747480 out.go:368] Setting JSON to true
	I0929 11:22:10.562341  747480 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":14668,"bootTime":1759130263,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 11:22:10.562428  747480 start.go:140] virtualization: kvm guest
	I0929 11:22:10.564340  747480 out.go:99] [download-only-989561] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W0929 11:22:10.564483  747480 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/21655-743952/.minikube/cache/preloaded-tarball: no such file or directory
	I0929 11:22:10.564533  747480 notify.go:220] Checking for updates...
	I0929 11:22:10.565652  747480 out.go:171] MINIKUBE_LOCATION=21655
	I0929 11:22:10.566810  747480 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 11:22:10.567833  747480 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21655-743952/kubeconfig
	I0929 11:22:10.568827  747480 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21655-743952/.minikube
	I0929 11:22:10.569703  747480 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W0929 11:22:10.571593  747480 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0929 11:22:10.571804  747480 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 11:22:10.594659  747480 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 11:22:10.594783  747480 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 11:22:10.649831  747480 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-09-29 11:22:10.640038467 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 11:22:10.649938  747480 docker.go:318] overlay module found
	I0929 11:22:10.651444  747480 out.go:99] Using the docker driver based on user configuration
	I0929 11:22:10.651475  747480 start.go:304] selected driver: docker
	I0929 11:22:10.651482  747480 start.go:924] validating driver "docker" against <nil>
	I0929 11:22:10.651608  747480 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 11:22:10.705490  747480 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-09-29 11:22:10.695810569 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 11:22:10.705699  747480 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0929 11:22:10.706421  747480 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I0929 11:22:10.706625  747480 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0929 11:22:10.708212  747480 out.go:171] Using Docker driver with root privileges
	I0929 11:22:10.709103  747480 cni.go:84] Creating CNI manager for ""
	I0929 11:22:10.709170  747480 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 11:22:10.709181  747480 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0929 11:22:10.709252  747480 start.go:348] cluster config:
	{Name:download-only-989561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-989561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 11:22:10.710435  747480 out.go:99] Starting "download-only-989561" primary control-plane node in "download-only-989561" cluster
	I0929 11:22:10.710467  747480 cache.go:123] Beginning downloading kic base image for docker with crio
	I0929 11:22:10.711462  747480 out.go:99] Pulling base image v0.0.48 ...
	I0929 11:22:10.711486  747480 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0929 11:22:10.711591  747480 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 11:22:10.728606  747480 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I0929 11:22:10.729332  747480 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory
	I0929 11:22:10.729436  747480 image.go:150] Writing gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I0929 11:22:10.816055  747480 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I0929 11:22:10.816090  747480 cache.go:58] Caching tarball of preloaded images
	I0929 11:22:10.816287  747480 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0929 11:22:10.818038  747480 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I0929 11:22:10.818063  747480 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 ...
	I0929 11:22:10.928713  747480 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21655-743952/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-989561 host does not exist
	  To start a cluster, run: "minikube start -p download-only-989561"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.06s)
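
Note the non-zero exit is expected here: a download-only profile never creates a control-plane host, so "minikube logs" has nothing to read and fails with exit status 85, as captured above. A minimal sketch of tolerating this in a script; the profile name is illustrative:

  # Exit status 85 (seen in the run above) just means no host was ever
  # created for this profile; don't treat it as a hard failure.
  minikube logs -p demo-download || echo "logs unavailable (exit $?)"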

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-989561
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.34.0/json-events (13.61s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-791399 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-791399 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (13.612199899s)
--- PASS: TestDownloadOnly/v1.34.0/json-events (13.61s)

                                                
                                    
TestDownloadOnly/v1.34.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/preload-exists
I0929 11:22:38.722421  747468 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
I0929 11:22:38.722469  747468 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21655-743952/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-791399
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-791399: exit status 85 (65.600189ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-989561 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-989561 │ jenkins │ v1.37.0 │ 29 Sep 25 11:22 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 29 Sep 25 11:22 UTC │ 29 Sep 25 11:22 UTC │
	│ delete  │ -p download-only-989561                                                                                                                                                   │ download-only-989561 │ jenkins │ v1.37.0 │ 29 Sep 25 11:22 UTC │ 29 Sep 25 11:22 UTC │
	│ start   │ -o=json --download-only -p download-only-791399 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-791399 │ jenkins │ v1.37.0 │ 29 Sep 25 11:22 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 11:22:25
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 11:22:25.151749  747861 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:22:25.151859  747861 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:22:25.151868  747861 out.go:374] Setting ErrFile to fd 2...
	I0929 11:22:25.151871  747861 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:22:25.152103  747861 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21655-743952/.minikube/bin
	I0929 11:22:25.152639  747861 out.go:368] Setting JSON to true
	I0929 11:22:25.153525  747861 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":14682,"bootTime":1759130263,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 11:22:25.153609  747861 start.go:140] virtualization: kvm guest
	I0929 11:22:25.155323  747861 out.go:99] [download-only-791399] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 11:22:25.155490  747861 notify.go:220] Checking for updates...
	I0929 11:22:25.156594  747861 out.go:171] MINIKUBE_LOCATION=21655
	I0929 11:22:25.157693  747861 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 11:22:25.158781  747861 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21655-743952/kubeconfig
	I0929 11:22:25.159868  747861 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21655-743952/.minikube
	I0929 11:22:25.160854  747861 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W0929 11:22:25.162774  747861 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0929 11:22:25.163036  747861 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 11:22:25.185995  747861 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 11:22:25.186064  747861 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 11:22:25.239914  747861 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-09-29 11:22:25.230482122 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 11:22:25.240056  747861 docker.go:318] overlay module found
	I0929 11:22:25.241501  747861 out.go:99] Using the docker driver based on user configuration
	I0929 11:22:25.241535  747861 start.go:304] selected driver: docker
	I0929 11:22:25.241542  747861 start.go:924] validating driver "docker" against <nil>
	I0929 11:22:25.241621  747861 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 11:22:25.293679  747861 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-09-29 11:22:25.2837899 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 11:22:25.293903  747861 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0929 11:22:25.294463  747861 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I0929 11:22:25.294637  747861 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0929 11:22:25.296302  747861 out.go:171] Using Docker driver with root privileges
	I0929 11:22:25.297325  747861 cni.go:84] Creating CNI manager for ""
	I0929 11:22:25.297384  747861 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 11:22:25.297394  747861 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0929 11:22:25.297454  747861 start.go:348] cluster config:
	{Name:download-only-791399 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:download-only-791399 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 11:22:25.298441  747861 out.go:99] Starting "download-only-791399" primary control-plane node in "download-only-791399" cluster
	I0929 11:22:25.298466  747861 cache.go:123] Beginning downloading kic base image for docker with crio
	I0929 11:22:25.299387  747861 out.go:99] Pulling base image v0.0.48 ...
	I0929 11:22:25.299409  747861 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 11:22:25.299519  747861 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 11:22:25.316936  747861 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I0929 11:22:25.317081  747861 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory
	I0929 11:22:25.317100  747861 image.go:68] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory, skipping pull
	I0929 11:22:25.317105  747861 image.go:137] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in cache, skipping pull
	I0929 11:22:25.317112  747861 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 as a tarball
	I0929 11:22:25.404173  747861 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.0/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0929 11:22:25.404216  747861 cache.go:58] Caching tarball of preloaded images
	I0929 11:22:25.404401  747861 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 11:22:25.406063  747861 out.go:99] Downloading Kubernetes v1.34.0 preload ...
	I0929 11:22:25.406083  747861 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 ...
	I0929 11:22:25.515720  747861 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.0/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2ff28357f4fb6607eaee8f503f8804cd -> /home/jenkins/minikube-integration/21655-743952/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-791399 host does not exist
	  To start a cluster, run: "minikube start -p download-only-791399"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.34.0/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.0/DeleteAll (0.23s)

                                                
                                    
TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-791399
--- PASS: TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnlyKic (1.18s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-059515 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-059515" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-059515
--- PASS: TestDownloadOnlyKic (1.18s)
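
This variant exercises download-only without pinning a Kubernetes version, which also pre-pulls the kicbase image used by the docker driver. A minimal sketch; the profile name is illustrative:

  # minikube resolves its default Kubernetes version; the kicbase image is
  # cached so a later real start does not pull it again.
  minikube start -p demo-kic --download-only --driver=docker --container-runtime=crio
  minikube delete -p demo-kic   # clean up, as the test does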

                                                
                                    
TestBinaryMirror (1.19s)

                                                
                                                
=== RUN   TestBinaryMirror
I0929 11:22:40.608251  747468 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-685633 --alsologtostderr --binary-mirror http://127.0.0.1:46621 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-685633" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-685633
--- PASS: TestBinaryMirror (1.19s)
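
--binary-mirror redirects the kubectl/kubelet/kubeadm binary downloads to an alternate host (the test spins one up on 127.0.0.1). A minimal sketch; the mirror URL is illustrative and must serve the same path layout as dl.k8s.io:

  # Combine with --download-only to verify the mirror without starting a cluster.
  minikube start -p demo-mirror --download-only \
    --binary-mirror http://127.0.0.1:46621 --driver=docker --container-runtime=crio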

                                                
                                    
TestOffline (82.89s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-652368 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-652368 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m20.296578449s)
helpers_test.go:175: Cleaning up "offline-crio-652368" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-652368
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-652368: (2.590115591s)
--- PASS: TestOffline (82.89s)
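
The offline test verifies that a start succeeds when every artifact is already on disk. A rough hand approximation, assuming the caches are warmed first; unlike the harness, it does not actually sever network access:

  # Warm the caches, then start; the second command should not need to hit
  # the network for images or preloads.
  minikube start -p demo-offline --download-only --driver=docker --container-runtime=crio
  minikube start -p demo-offline --memory=3072 --wait=true --driver=docker --container-runtime=crio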

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.16s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-164332
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-164332: exit status 85 (155.023341ms)

                                                
                                                
-- stdout --
	* Profile "addons-164332" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-164332"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.16s)
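
Addon commands validate the profile first, so running them against a cluster that was never started fails fast with exit status 85 and the hint shown above. A minimal sketch:

  # Expected to fail: the profile does not exist yet.
  minikube addons enable dashboard -p no-such-profile \
    || echo "as expected, addons enable failed (exit $?)"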

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.16s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-164332
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-164332: exit status 85 (155.5342ms)

                                                
                                                
-- stdout --
	* Profile "addons-164332" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-164332"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.16s)

                                                
                                    
TestAddons/Setup (162.38s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-164332 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-164332 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m42.376776746s)
--- PASS: TestAddons/Setup (162.38s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-164332 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-164332 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)
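
The check above relies on the gcp-auth addon replicating its credentials Secret into newly created namespaces. The equivalent hand check, reusing the test's kube context; the namespace name is illustrative:

  # A fresh namespace should receive a copy of the gcp-auth secret.
  kubectl --context addons-164332 create ns demo-ns
  kubectl --context addons-164332 get secret gcp-auth -n demo-ns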

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (11.47s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-164332 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-164332 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [103f9a84-b941-4cdd-a010-3d74e06147c3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [103f9a84-b941-4cdd-a010-3d74e06147c3] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 11.00370866s
addons_test.go:694: (dbg) Run:  kubectl --context addons-164332 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-164332 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-164332 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (11.47s)
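
The gcp-auth webhook mutates newly scheduled pods to carry credential environment variables, which is what the printenv probes above assert. A one-line spot check against the test's busybox fixture:

  # Both variables are injected by the gcp-auth admission webhook.
  kubectl --context addons-164332 exec busybox -- \
    printenv GOOGLE_APPLICATION_CREDENTIALS GOOGLE_CLOUD_PROJECT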

                                                
                                    
TestAddons/parallel/Registry (16.62s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 11.851071ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-cmshl" [5677ae02-8e21-4448-b64a-9eb03b4d372f] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002973663s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-lrxjf" [abfc2247-51ab-4a89-a23b-3eb3f7ebd7f6] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004059328s
addons_test.go:392: (dbg) Run:  kubectl --context addons-164332 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-164332 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-164332 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.795925583s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-164332 ip
2025/09/29 11:26:00 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-164332 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.62s)
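
The registry addon is reachable in-cluster via service DNS and from the host via the node IP on port 5000, which is what the two probes above exercise. A hand version of both; the /v2/ path is the standard Docker registry API root and is an assumption here, not something the test itself hits:

  # In-cluster: probe the service by DNS from a throwaway pod.
  kubectl --context addons-164332 run reg-probe --rm -i --restart=Never \
    --image=gcr.io/k8s-minikube/busybox -- \
    wget --spider -S http://registry.kube-system.svc.cluster.local
  # From the host: the registry proxy publishes port 5000 on the node IP.
  curl -sI "http://$(minikube -p addons-164332 ip):5000/v2/"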

                                                
                                    
TestAddons/parallel/RegistryCreds (0.62s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.942016ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-164332
addons_test.go:332: (dbg) Run:  kubectl --context addons-164332 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-164332 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.62s)
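
registry-creds is configured non-interactively with a JSON answers file via -f, and the result materializes as Secrets in kube-system, which the yaml dump above inspects. A minimal sketch; creds.json stands in for the test's answers file:

  # Configure the addon from a file, then confirm the secrets landed.
  minikube addons configure registry-creds -f ./creds.json -p addons-164332
  kubectl --context addons-164332 -n kube-system get secrets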

                                                
                                    
TestAddons/parallel/InspektorGadget (6.27s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-zb8f7" [9c07c022-c7c7-4c43-9319-a48ad12f5d5f] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003476028s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-164332 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (6.27s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.66s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 11.644121ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-8br9j" [8e2ab083-272f-4c43-9dcf-cf2726a7560d] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003092888s
addons_test.go:463: (dbg) Run:  kubectl --context addons-164332 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-164332 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.66s)
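
Once metrics-server reports healthy, the metrics API backs kubectl top, which is exactly the probe the test runs. By hand:

  # Both commands require the metrics API served by metrics-server.
  kubectl --context addons-164332 top pods -n kube-system
  kubectl --context addons-164332 top nodes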

                                                
                                    
TestAddons/parallel/CSI (57.63s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0929 11:25:56.706313  747468 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0929 11:25:56.710001  747468 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0929 11:25:56.710042  747468 kapi.go:107] duration metric: took 3.747862ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.761229ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-164332 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-164332 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-164332 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-164332 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-164332 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-164332 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-164332 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-164332 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-164332 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-164332 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-164332 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-164332 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-164332 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-164332 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-164332 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-164332 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-164332 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-164332 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-164332 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-164332 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-164332 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [7a21b42e-90f1-4286-b574-c89e8887bf5c] Pending
helpers_test.go:352: "task-pv-pod" [7a21b42e-90f1-4286-b574-c89e8887bf5c] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [7a21b42e-90f1-4286-b574-c89e8887bf5c] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 16.004304506s
addons_test.go:572: (dbg) Run:  kubectl --context addons-164332 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-164332 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-164332 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-164332 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-164332 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-164332 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-164332 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-164332 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-164332 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-164332 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-164332 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-164332 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-164332 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [6d0055fd-e2f6-44fe-ab71-88223aaed574] Pending
helpers_test.go:352: "task-pv-pod-restore" [6d0055fd-e2f6-44fe-ab71-88223aaed574] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [6d0055fd-e2f6-44fe-ab71-88223aaed574] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.00413711s
addons_test.go:614: (dbg) Run:  kubectl --context addons-164332 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-164332 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-164332 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-164332 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-164332 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-164332 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.538027515s)
--- PASS: TestAddons/parallel/CSI (57.63s)
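
The repeated helpers_test.go:402 lines above are a poll loop: the helper re-queries the PVC phase until it reaches the wanted value or the deadline passes. A minimal sketch of that shape, assuming kubectl is on PATH; waitForPVCPhase is a hypothetical helper for illustration, not minikube's actual code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPVCPhase polls `kubectl get pvc ... -o jsonpath={.status.phase}`
// until the PVC reports the wanted phase or the timeout elapses.
func waitForPVCPhase(kubecontext, ns, pvc, want string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubecontext,
			"get", "pvc", pvc, "-n", ns,
			"-o", "jsonpath={.status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == want {
			return nil
		}
		time.Sleep(2 * time.Second) // fixed interval; the real helper's cadence may differ
	}
	return fmt.Errorf("pvc %s/%s did not reach phase %q within %v", ns, pvc, want, timeout)
}

func main() {
	// Context, namespace, and claim name mirror the log above.
	if err := waitForPVCPhase("addons-164332", "default", "hpvc", "Bound", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}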

                                                
                                    
TestAddons/parallel/Headlamp (17.56s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-164332 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-85f8f8dc54-whkmf" [c0ba2c67-a72e-4505-b0bb-36641d51cda7] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-85f8f8dc54-whkmf" [c0ba2c67-a72e-4505-b0bb-36641d51cda7] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.00367914s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-164332 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-164332 addons disable headlamp --alsologtostderr -v=1: (5.817487886s)
--- PASS: TestAddons/parallel/Headlamp (17.56s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.49s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-85f6b7fc65-k6rzx" [dfa16563-8ffc-4bcf-81da-ff5b3453ccd1] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.00289223s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-164332 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.49s)

                                                
                                    
TestAddons/parallel/LocalPath (14.14s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-164332 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-164332 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-164332 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-164332 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-164332 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-164332 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-164332 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-164332 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-164332 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-164332 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [5d973ced-461a-44c7-a16e-38052acca474] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [5d973ced-461a-44c7-a16e-38052acca474] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [5d973ced-461a-44c7-a16e-38052acca474] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.003916657s
addons_test.go:967: (dbg) Run:  kubectl --context addons-164332 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-164332 ssh "cat /opt/local-path-provisioner/pvc-00b22499-75b1-465d-9e54-702d51278b65_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-164332 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-164332 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-164332 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (14.14s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.48s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-z46zt" [8c5b5a65-2856-463b-aa22-640067a5e289] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003517018s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-164332 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.48s)

                                                
                                    
TestAddons/parallel/Yakd (10.9s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-86lwx" [72dffa9c-befa-40e7-b94d-e8684a0201af] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004053477s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-164332 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-164332 addons disable yakd --alsologtostderr -v=1: (5.898944662s)
--- PASS: TestAddons/parallel/Yakd (10.90s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (6.52s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-jx2jk" [1350438a-4e00-4bc2-a74a-245c5429f7f0] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 6.003589879s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-164332 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (6.52s)

                                                
                                    
TestAddons/StoppedEnableDisable (16.47s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-164332
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-164332: (16.222033768s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-164332
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-164332
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-164332
--- PASS: TestAddons/StoppedEnableDisable (16.47s)

                                                
                                    
TestCertOptions (28.3s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-668064 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-668064 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (25.283507011s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-668064 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-668064 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-668064 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-668064" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-668064
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-668064: (2.391695197s)
--- PASS: TestCertOptions (28.30s)
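
The SAN check above is done with openssl inside the node; the same verification can be sketched in Go with crypto/x509. Reading "apiserver.crt" from the working directory is an assumption for the example (the test reads it over `minikube ssh`).

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"net"
	"os"
)

func main() {
	data, err := os.ReadFile("apiserver.crt") // hypothetical local copy of the cert
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// The extra --apiserver-names and --apiserver-ips flags from the log
	// should all show up as subject alternative names in the certificate.
	fmt.Println("DNS SANs:", cert.DNSNames) // expect localhost and www.google.com among them
	for _, ip := range cert.IPAddresses {
		if ip.Equal(net.ParseIP("192.168.15.15")) {
			fmt.Println("extra API server IP SAN present:", ip)
		}
	}
}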

                                                
                                    
TestCertExpiration (216.72s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-993666 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-993666 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (25.46741949s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-993666 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-993666 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (8.772011082s)
helpers_test.go:175: Cleaning up "cert-expiration-993666" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-993666
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-993666: (2.478234207s)
--- PASS: TestCertExpiration (216.72s)

                                                
                                    
TestForceSystemdFlag (25.44s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-324974 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-324974 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (21.596624699s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-324974 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-324974" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-324974
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-324974: (3.530363895s)
--- PASS: TestForceSystemdFlag (25.44s)

                                                
                                    
TestForceSystemdEnv (36.9s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-710265 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-710265 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (33.694566055s)
helpers_test.go:175: Cleaning up "force-systemd-env-710265" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-710265
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-710265: (3.201398627s)
--- PASS: TestForceSystemdEnv (36.90s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0.88s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
I0929 12:08:39.517715  747468 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0929 12:08:39.517877  747468 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate4153201704/001:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0929 12:08:39.548051  747468 install.go:163] /tmp/TestKVMDriverInstallOrUpdate4153201704/001/docker-machine-driver-kvm2 version is 1.1.1
W0929 12:08:39.548104  747468 install.go:76] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.37.0
W0929 12:08:39.548242  747468 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0929 12:08:39.548291  747468 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate4153201704/001/docker-machine-driver-kvm2
I0929 12:08:40.247093  747468 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate4153201704/001:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0929 12:08:40.263144  747468 install.go:163] /tmp/TestKVMDriverInstallOrUpdate4153201704/001/docker-machine-driver-kvm2 version is 1.37.0
--- PASS: TestKVMDriverInstallOrUpdate (0.88s)
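
The log above shows the validate-then-upgrade flow: the installed driver reports 1.1.1, minikube wants 1.37.0, so it downloads a fresh binary and re-validates. A sketch of that shape; invoking the driver with a `version` argument and the regexp parsing are assumptions for illustration, not minikube's exact install.go logic.

package main

import (
	"fmt"
	"os/exec"
	"regexp"
)

var versionRE = regexp.MustCompile(`\d+\.\d+\.\d+`)

// driverVersion asks the driver binary for its version string and extracts
// the first semver-looking token from its output.
func driverVersion(path string) (string, error) {
	out, err := exec.Command(path, "version").CombinedOutput() // assumed subcommand
	if err != nil {
		return "", err
	}
	if v := versionRE.FindString(string(out)); v != "" {
		return v, nil
	}
	return "", fmt.Errorf("no version found in output %q", out)
}

func main() {
	const want = "1.37.0"
	got, err := driverVersion("./docker-machine-driver-kvm2") // illustrative path
	if err != nil || got != want {
		fmt.Printf("have %q (err=%v), want %s: would download a fresh driver\n", got, err, want)
		return
	}
	fmt.Println("driver up to date:", got)
}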

                                                
                                    
TestErrorSpam/setup (19.22s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-008755 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-008755 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-008755 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-008755 --driver=docker  --container-runtime=crio: (19.220254564s)
--- PASS: TestErrorSpam/setup (19.22s)

                                                
                                    
TestErrorSpam/start (0.61s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-008755 --log_dir /tmp/nospam-008755 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-008755 --log_dir /tmp/nospam-008755 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-008755 --log_dir /tmp/nospam-008755 start --dry-run
--- PASS: TestErrorSpam/start (0.61s)

                                                
                                    
TestErrorSpam/status (0.91s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-008755 --log_dir /tmp/nospam-008755 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-008755 --log_dir /tmp/nospam-008755 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-008755 --log_dir /tmp/nospam-008755 status
--- PASS: TestErrorSpam/status (0.91s)

                                                
                                    
TestErrorSpam/pause (1.44s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-008755 --log_dir /tmp/nospam-008755 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-008755 --log_dir /tmp/nospam-008755 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-008755 --log_dir /tmp/nospam-008755 pause
--- PASS: TestErrorSpam/pause (1.44s)

                                                
                                    
TestErrorSpam/unpause (1.48s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-008755 --log_dir /tmp/nospam-008755 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-008755 --log_dir /tmp/nospam-008755 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-008755 --log_dir /tmp/nospam-008755 unpause
--- PASS: TestErrorSpam/unpause (1.48s)

                                                
                                    
TestErrorSpam/stop (2.49s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-008755 --log_dir /tmp/nospam-008755 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-008755 --log_dir /tmp/nospam-008755 stop: (2.310420793s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-008755 --log_dir /tmp/nospam-008755 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-008755 --log_dir /tmp/nospam-008755 stop
--- PASS: TestErrorSpam/stop (2.49s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21655-743952/.minikube/files/etc/test/nested/copy/747468/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (68.16s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-550377 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E0929 11:30:24.931664  747468 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/addons-164332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:30:24.938108  747468 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/addons-164332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:30:24.949524  747468 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/addons-164332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:30:24.970917  747468 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/addons-164332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:30:25.012258  747468 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/addons-164332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:30:25.093704  747468 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/addons-164332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:30:25.255218  747468 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/addons-164332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:30:25.576896  747468 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/addons-164332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:30:26.218944  747468 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/addons-164332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:30:27.500725  747468 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/addons-164332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:30:30.063204  747468 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/addons-164332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:30:35.185278  747468 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/addons-164332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-550377 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m8.156002239s)
--- PASS: TestFunctional/serial/StartWithProxy (68.16s)
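
Worth noting: the repeated cert_rotation errors above arrive at roughly doubling intervals (~6ms, 11ms, 21ms, 42ms, ... up to ~5s), i.e. an exponential backoff on the failing client-cert reload. A minimal sketch of that retry shape; the base delay, cap, and attempt count are illustrative, not client-go's exact values.

package main

import (
	"errors"
	"fmt"
	"time"
)

// retryWithBackoff runs op until it succeeds, doubling the wait after every
// failure up to maxDelay, for at most the given number of attempts.
func retryWithBackoff(op func() error, base, maxDelay time.Duration, attempts int) error {
	delay := base
	for i := 0; i < attempts; i++ {
		if err := op(); err == nil {
			return nil
		}
		time.Sleep(delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
	return errors.New("still failing after all retries")
}

func main() {
	attempt := 0
	_ = retryWithBackoff(func() error {
		attempt++
		fmt.Println("attempt", attempt) // stand-in for reloading client.crt
		return errors.New("open client.crt: no such file or directory")
	}, 5*time.Millisecond, 5*time.Second, 10)
}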

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (6.35s)

=== RUN   TestFunctional/serial/SoftStart
I0929 11:30:38.063592  747468 config.go:182] Loaded profile config "functional-550377": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-550377 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-550377 --alsologtostderr -v=8: (6.348124476s)
functional_test.go:678: soft start took 6.348854503s for "functional-550377" cluster.
I0929 11:30:44.412149  747468 config.go:182] Loaded profile config "functional-550377": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/SoftStart (6.35s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-550377 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 cache add registry.k8s.io/pause:3.1
E0929 11:30:45.427180  747468 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/addons-164332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-550377 cache add registry.k8s.io/pause:3.1: (1.158597382s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.10s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.14s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-550377 /tmp/TestFunctionalserialCacheCmdcacheadd_local2250486165/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 cache add minikube-local-cache-test:functional-550377
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-550377 cache add minikube-local-cache-test:functional-550377: (1.829381979s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 cache delete minikube-local-cache-test:functional-550377
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-550377
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.14s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.7s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-550377 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (274.823351ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.70s)
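
The reload check above uses `crictl inspecti`'s exit status as a presence probe: status 1 while the image is gone, status 0 once `cache reload` restores it. A minimal sketch of that probe over `minikube ssh`, assuming a `minikube` binary on PATH rather than the test's out/ build; profile and image names are taken from the log.

package main

import (
	"fmt"
	"os/exec"
)

// imagePresent reports whether crictl can inspect the image inside the node.
func imagePresent(profile, image string) bool {
	err := exec.Command("minikube", "-p", profile,
		"ssh", "sudo crictl inspecti "+image).Run()
	return err == nil // non-nil means crictl exited non-zero: image absent
}

func main() {
	fmt.Println("pause:latest present:",
		imagePresent("functional-550377", "registry.k8s.io/pause:latest"))
}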

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 kubectl -- --context functional-550377 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-550377 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (46.16s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-550377 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0929 11:31:05.908593  747468 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/addons-164332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-550377 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (46.157656476s)
functional_test.go:776: restart took 46.157817297s for "functional-550377" cluster.
I0929 11:31:38.327024  747468 config.go:182] Loaded profile config "functional-550377": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/ExtraConfig (46.16s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-550377 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
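
The phase/status lines above come from pulling the control-plane pods as JSON and reading each pod's phase and Ready condition. A sketch of that check, assuming kubectl is on PATH; the struct covers only the fields the check needs and is not minikube's actual code.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// podList mirrors just enough of `kubectl get po -o json` for a health check.
type podList struct {
	Items []struct {
		Metadata struct{ Name string } `json:"metadata"`
		Status   struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-550377",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		ready := "Unknown"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				ready = c.Status
			}
		}
		fmt.Printf("%s phase: %s, ready: %s\n", p.Metadata.Name, p.Status.Phase, ready)
	}
}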

                                                
                                    
TestFunctional/serial/LogsCmd (1.42s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-550377 logs: (1.422514738s)
--- PASS: TestFunctional/serial/LogsCmd (1.42s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.42s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 logs --file /tmp/TestFunctionalserialLogsFileCmd919532758/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-550377 logs --file /tmp/TestFunctionalserialLogsFileCmd919532758/001/logs.txt: (1.417713842s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.42s)

                                                
                                    
TestFunctional/serial/InvalidService (4.24s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-550377 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-550377
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-550377: exit status 115 (344.558856ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30413 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-550377 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.24s)
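
The non-zero exit above is a specific code (115, the SVC_UNREACHABLE reason in stderr) rather than a generic failure. A minimal sketch of reading that code from Go; it assumes a `minikube` binary on PATH rather than the test's out/ build.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	err := exec.Command("minikube", "service", "invalid-svc", "-p", "functional-550377").Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Println("exit status:", exitErr.ExitCode()) // the test expects 115 here
	} else if err != nil {
		fmt.Println("could not run minikube:", err)
	}
}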

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.35s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-550377 config get cpus: exit status 14 (58.926387ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-550377 config get cpus: exit status 14 (53.573738ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.35s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (14.11s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-550377 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-550377 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 790477: os: process already finished
E0929 11:33:08.792341  747468 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/addons-164332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:35:24.922729  747468 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/addons-164332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:35:52.633662  747468 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/addons-164332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:40:24.922545  747468 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/addons-164332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/DashboardCmd (14.11s)
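
The dashboard test starts minikube as a background daemon and later stops it; the "unable to kill pid ...: os: process already finished" helper message just means the process had already exited, which is harmless. A minimal sketch of that start/stop pattern; the flags mirror the log and the sleep stands in for the test's real work.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	cmd := exec.Command("minikube", "dashboard", "--url", "--port", "36195",
		"-p", "functional-550377", "--alsologtostderr", "-v=1")
	if err := cmd.Start(); err != nil { // Start, not Run: leave it in the background
		panic(err)
	}
	time.Sleep(10 * time.Second)

	// Killing an already-finished process returns an error that is safe to
	// log and ignore, which is exactly what helpers_test.go reports above.
	if err := cmd.Process.Kill(); err != nil {
		fmt.Println("kill:", err)
	}
	_ = cmd.Wait() // reap the child either way
}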

                                                
                                    
TestFunctional/parallel/DryRun (0.35s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-550377 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-550377 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (143.853563ms)

                                                
                                                
-- stdout --
	* [functional-550377] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21655
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21655-743952/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21655-743952/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 11:32:19.851780  790074 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:32:19.852035  790074 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:32:19.852043  790074 out.go:374] Setting ErrFile to fd 2...
	I0929 11:32:19.852047  790074 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:32:19.852212  790074 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21655-743952/.minikube/bin
	I0929 11:32:19.852646  790074 out.go:368] Setting JSON to false
	I0929 11:32:19.853671  790074 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":15277,"bootTime":1759130263,"procs":241,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 11:32:19.853727  790074 start.go:140] virtualization: kvm guest
	I0929 11:32:19.855417  790074 out.go:179] * [functional-550377] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 11:32:19.856541  790074 notify.go:220] Checking for updates...
	I0929 11:32:19.856581  790074 out.go:179]   - MINIKUBE_LOCATION=21655
	I0929 11:32:19.857685  790074 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 11:32:19.858775  790074 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21655-743952/kubeconfig
	I0929 11:32:19.859746  790074 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21655-743952/.minikube
	I0929 11:32:19.860670  790074 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 11:32:19.861635  790074 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 11:32:19.863171  790074 config.go:182] Loaded profile config "functional-550377": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 11:32:19.863757  790074 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 11:32:19.887109  790074 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 11:32:19.887215  790074 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 11:32:19.942914  790074 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-09-29 11:32:19.931321762 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 11:32:19.943039  790074 docker.go:318] overlay module found
	I0929 11:32:19.944621  790074 out.go:179] * Using the docker driver based on existing profile
	I0929 11:32:19.945658  790074 start.go:304] selected driver: docker
	I0929 11:32:19.945672  790074 start.go:924] validating driver "docker" against &{Name:functional-550377 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-550377 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mou
ntUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 11:32:19.945759  790074 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 11:32:19.947339  790074 out.go:203] 
	W0929 11:32:19.948278  790074 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0929 11:32:19.949164  790074 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-550377 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.35s)

TestFunctional/parallel/InternationalLanguage (0.15s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-550377 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-550377 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (146.546194ms)

-- stdout --
	* [functional-550377] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21655
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21655-743952/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21655-743952/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0929 11:32:19.708997  789994 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:32:19.709104  789994 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:32:19.709117  789994 out.go:374] Setting ErrFile to fd 2...
	I0929 11:32:19.709123  789994 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:32:19.709423  789994 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21655-743952/.minikube/bin
	I0929 11:32:19.709874  789994 out.go:368] Setting JSON to false
	I0929 11:32:19.710929  789994 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":15277,"bootTime":1759130263,"procs":241,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 11:32:19.711047  789994 start.go:140] virtualization: kvm guest
	I0929 11:32:19.713091  789994 out.go:179] * [functional-550377] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I0929 11:32:19.714180  789994 notify.go:220] Checking for updates...
	I0929 11:32:19.714212  789994 out.go:179]   - MINIKUBE_LOCATION=21655
	I0929 11:32:19.715212  789994 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 11:32:19.716151  789994 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21655-743952/kubeconfig
	I0929 11:32:19.717219  789994 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21655-743952/.minikube
	I0929 11:32:19.718354  789994 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 11:32:19.719302  789994 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 11:32:19.720583  789994 config.go:182] Loaded profile config "functional-550377": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 11:32:19.721028  789994 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 11:32:19.744066  789994 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 11:32:19.744203  789994 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 11:32:19.798449  789994 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-09-29 11:32:19.78719717 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 11:32:19.798558  789994 docker.go:318] overlay module found
	I0929 11:32:19.800055  789994 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I0929 11:32:19.801095  789994 start.go:304] selected driver: docker
	I0929 11:32:19.801112  789994 start.go:924] validating driver "docker" against &{Name:functional-550377 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-550377 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mou
ntUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 11:32:19.801199  789994 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 11:32:19.802708  789994 out.go:203] 
	W0929 11:32:19.803656  789994 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0929 11:32:19.804605  789994 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)

TestFunctional/parallel/StatusCmd (0.91s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.91s)

TestFunctional/parallel/AddonsCmd (0.13s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

TestFunctional/parallel/PersistentVolumeClaim (33.96s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [310c238d-e434-4798-b13a-5ef25366d458] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004330346s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-550377 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-550377 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-550377 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-550377 apply -f testdata/storage-provisioner/pod.yaml
I0929 11:31:53.252731  747468 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [6df5a002-bbad-4e6f-b254-a9897d3882e9] Pending
helpers_test.go:352: "sp-pod" [6df5a002-bbad-4e6f-b254-a9897d3882e9] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [6df5a002-bbad-4e6f-b254-a9897d3882e9] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 20.003718538s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-550377 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-550377 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-550377 delete -f testdata/storage-provisioner/pod.yaml: (1.193806715s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-550377 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [67804604-e8a6-4e4b-86ff-01e369ff69ac] Pending
helpers_test.go:352: "sp-pod" [67804604-e8a6-4e4b-86ff-01e369ff69ac] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [67804604-e8a6-4e4b-86ff-01e369ff69ac] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003786334s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-550377 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (33.96s)

TestFunctional/parallel/SSHCmd (0.58s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.58s)

TestFunctional/parallel/CpCmd (1.73s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 ssh -n functional-550377 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 cp functional-550377:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4219465008/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 ssh -n functional-550377 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 ssh -n functional-550377 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.73s)

TestFunctional/parallel/MySQL (18.18s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-550377 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-pvxvg" [b59e3dbd-b04c-4331-a9aa-f9a2aa393615] Pending
helpers_test.go:352: "mysql-5bb876957f-pvxvg" [b59e3dbd-b04c-4331-a9aa-f9a2aa393615] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-pvxvg" [b59e3dbd-b04c-4331-a9aa-f9a2aa393615] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 16.003565468s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-550377 exec mysql-5bb876957f-pvxvg -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-550377 exec mysql-5bb876957f-pvxvg -- mysql -ppassword -e "show databases;": exit status 1 (115.35381ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0929 11:32:02.394075  747468 retry.go:31] will retry after 689.516422ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-550377 exec mysql-5bb876957f-pvxvg -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-550377 exec mysql-5bb876957f-pvxvg -- mysql -ppassword -e "show databases;": exit status 1 (139.100511ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0929 11:32:03.223426  747468 retry.go:31] will retry after 904.68971ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-550377 exec mysql-5bb876957f-pvxvg -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (18.18s)

TestFunctional/parallel/FileSync (0.29s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/747468/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 ssh "sudo cat /etc/test/nested/copy/747468/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.29s)

TestFunctional/parallel/CertSync (1.71s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/747468.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 ssh "sudo cat /etc/ssl/certs/747468.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/747468.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 ssh "sudo cat /usr/share/ca-certificates/747468.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/7474682.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 ssh "sudo cat /etc/ssl/certs/7474682.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/7474682.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 ssh "sudo cat /usr/share/ca-certificates/7474682.pem"
E0929 11:31:46.870755  747468 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/addons-164332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.71s)

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-550377 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.6s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-550377 ssh "sudo systemctl is-active docker": exit status 1 (302.734887ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-550377 ssh "sudo systemctl is-active containerd": exit status 1 (297.633033ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.60s)

TestFunctional/parallel/License (0.37s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.37s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-550377 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.0
registry.k8s.io/kube-proxy:v1.34.0
registry.k8s.io/kube-controller-manager:v1.34.0
registry.k8s.io/kube-apiserver:v1.34.0
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-550377
localhost/kicbase/echo-server:functional-550377
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-550377 image ls --format short --alsologtostderr:
I0929 11:32:22.612183  790805 out.go:360] Setting OutFile to fd 1 ...
I0929 11:32:22.612447  790805 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 11:32:22.612457  790805 out.go:374] Setting ErrFile to fd 2...
I0929 11:32:22.612462  790805 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 11:32:22.612655  790805 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21655-743952/.minikube/bin
I0929 11:32:22.613281  790805 config.go:182] Loaded profile config "functional-550377": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 11:32:22.613370  790805 config.go:182] Loaded profile config "functional-550377": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 11:32:22.613728  790805 cli_runner.go:164] Run: docker container inspect functional-550377 --format={{.State.Status}}
I0929 11:32:22.631580  790805 ssh_runner.go:195] Run: systemctl --version
I0929 11:32:22.631627  790805 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-550377
I0929 11:32:22.648288  790805 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21655-743952/.minikube/machines/functional-550377/id_rsa Username:docker}
I0929 11:32:22.739706  790805 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-550377 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ localhost/my-image                      │ functional-550377  │ 541af1a138aca │ 1.47MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ localhost/minikube-local-cache-test     │ functional-550377  │ a5a1d2d755ddc │ 3.33kB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.0            │ 90550c43ad2bc │ 89.1MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.0            │ a0af72f2ec6d6 │ 76MB   │
│ registry.k8s.io/kube-proxy              │ v1.34.0            │ df0860106674d │ 73.1MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.0            │ 46169d968e920 │ 53.8MB │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ docker.io/library/nginx                 │ latest             │ 41f689c209100 │ 197MB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ docker.io/library/nginx                 │ alpine             │ 4a86014ec6994 │ 53.9MB │
│ localhost/kicbase/echo-server           │ functional-550377  │ 9056ab77afb8e │ 4.94MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-550377 image ls --format table --alsologtostderr:
I0929 11:32:26.934405  791446 out.go:360] Setting OutFile to fd 1 ...
I0929 11:32:26.934703  791446 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 11:32:26.934714  791446 out.go:374] Setting ErrFile to fd 2...
I0929 11:32:26.934721  791446 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 11:32:26.934974  791446 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21655-743952/.minikube/bin
I0929 11:32:26.935626  791446 config.go:182] Loaded profile config "functional-550377": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 11:32:26.935742  791446 config.go:182] Loaded profile config "functional-550377": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 11:32:26.936203  791446 cli_runner.go:164] Run: docker container inspect functional-550377 --format={{.State.Status}}
I0929 11:32:26.954210  791446 ssh_runner.go:195] Run: systemctl --version
I0929 11:32:26.954273  791446 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-550377
I0929 11:32:26.971784  791446 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21655-743952/.minikube/machines/functional-550377/id_rsa Username:docker}
I0929 11:32:27.070996  791446 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-550377 image ls --format json --alsologtostderr:
[{"id":"46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc","repoDigests":["registry.k8s.io/kube-scheduler@sha256:31b77e40d737b6d3e3b19b4afd681c9362aef06353075895452fc9a41fe34140","registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.0"],"size":"53844823"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags"
:["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9","repoDigests":["docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8","docker.io/library/nginx@sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a"],"repoTags":["docker.io/library/nginx:alpine"],"size":"53949946"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b501620
9e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-550377"],"size":"4943877"},{"id":"90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90","repoDigests":["registry.k8s.io/kube-apiserver@sha256:495d3253a47a9a64a62041d518678c8b101fb628488e729d9f52ddff7cf82a86","registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8ef
a1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.0"],"size":"89050097"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"541af1a138aca509dc490486b88aa2575881d31f2a1031d6a1bd89728d280f41","repoDigests":["localhost/my-image@sha256:4afa206060b30025df80923a114215252e5fa385a3d787bdce3617b923ad0be9"],"repo
Tags":["localhost/my-image:functional-550377"],"size":"1468193"},{"id":"df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce","repoDigests":["registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067","registry.k8s.io/kube-proxy@sha256:5f71731a5eadcf74f3997dfc159bf5ca36e48c3387c19082fc21871e0dbb19af"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.0"],"size":"73138071"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1
"],"size":"76103547"},{"id":"41f689c209100e6cadf3ce7fdd02035e90dbd1d586716bf8fc6ea55c365b2d81","repoDigests":["docker.io/library/nginx@sha256:27637a97e3d1d0518adc2a877b60db3779970f19474b6e586ddcbc2d5500e285","docker.io/library/nginx@sha256:d5f28ef21aabddd098f3dbc21fe5b7a7d7a184720bc07da0b6c9b9820e97f25e"],"repoTags":["docker.io/library/nginx:latest"],"size":"196550530"},{"id":"a5a1d2d755ddc7afb19667df8ec7b0f93b0c977ac9702620afd3a503f3cdba7b","repoDigests":["localhost/minikube-local-cache-test@sha256:d0841c90bff3fdb9dcd0e14dde4e84810da6e1e676e5dce50a45f866991a0e76"],"repoTags":["localhost/minikube-local-cache-test:functional-550377"],"size":"3330"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de1
9145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"7043289caedcf650094742f72541581cf86f54847ebc204f4e3f817fd8709757","repoDigests":["docker.io/library/823c95d47e691d742b8ef6f0672599471e42f9a32e649c9d9c9bef91b220306a-tmp@sha256:8d87055d79bd48b718aedf96814f72ab309f3bb2cec472b455ba2b0601d786d2"],"repoTags":[],"size":"1465612"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:82ea603ed3cce63f9f870f22299741e0011318391cf722dd924a1d5a9f8ce6f6","registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c657
4ca4b67f162911cb200c0a51a5b9bff81"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.0"],"size":"76004183"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-550377 image ls --format json --alsologtostderr:
I0929 11:32:26.714287  791396 out.go:360] Setting OutFile to fd 1 ...
I0929 11:32:26.714439  791396 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 11:32:26.714454  791396 out.go:374] Setting ErrFile to fd 2...
I0929 11:32:26.714461  791396 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 11:32:26.714831  791396 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21655-743952/.minikube/bin
I0929 11:32:26.715613  791396 config.go:182] Loaded profile config "functional-550377": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 11:32:26.715746  791396 config.go:182] Loaded profile config "functional-550377": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 11:32:26.716258  791396 cli_runner.go:164] Run: docker container inspect functional-550377 --format={{.State.Status}}
I0929 11:32:26.735678  791396 ssh_runner.go:195] Run: systemctl --version
I0929 11:32:26.735737  791396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-550377
I0929 11:32:26.754540  791396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21655-743952/.minikube/machines/functional-550377/id_rsa Username:docker}
I0929 11:32:26.848134  791396 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-550377 image ls --format yaml --alsologtostderr:
- id: a5a1d2d755ddc7afb19667df8ec7b0f93b0c977ac9702620afd3a503f3cdba7b
repoDigests:
- localhost/minikube-local-cache-test@sha256:d0841c90bff3fdb9dcd0e14dde4e84810da6e1e676e5dce50a45f866991a0e76
repoTags:
- localhost/minikube-local-cache-test:functional-550377
size: "3330"
- id: 90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:495d3253a47a9a64a62041d518678c8b101fb628488e729d9f52ddff7cf82a86
- registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.0
size: "89050097"
- id: a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:82ea603ed3cce63f9f870f22299741e0011318391cf722dd924a1d5a9f8ce6f6
- registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.0
size: "76004183"
- id: 46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:31b77e40d737b6d3e3b19b4afd681c9362aef06353075895452fc9a41fe34140
- registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.0
size: "53844823"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-550377
size: "4943877"
- id: df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce
repoDigests:
- registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067
- registry.k8s.io/kube-proxy@sha256:5f71731a5eadcf74f3997dfc159bf5ca36e48c3387c19082fc21871e0dbb19af
repoTags:
- registry.k8s.io/kube-proxy:v1.34.0
size: "73138071"
- id: 41f689c209100e6cadf3ce7fdd02035e90dbd1d586716bf8fc6ea55c365b2d81
repoDigests:
- docker.io/library/nginx@sha256:27637a97e3d1d0518adc2a877b60db3779970f19474b6e586ddcbc2d5500e285
- docker.io/library/nginx@sha256:d5f28ef21aabddd098f3dbc21fe5b7a7d7a184720bc07da0b6c9b9820e97f25e
repoTags:
- docker.io/library/nginx:latest
size: "196550530"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9
repoDigests:
- docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8
- docker.io/library/nginx@sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a
repoTags:
- docker.io/library/nginx:alpine
size: "53949946"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-550377 image ls --format yaml --alsologtostderr:
I0929 11:32:22.825561  790854 out.go:360] Setting OutFile to fd 1 ...
I0929 11:32:22.825842  790854 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 11:32:22.825853  790854 out.go:374] Setting ErrFile to fd 2...
I0929 11:32:22.825859  790854 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 11:32:22.826110  790854 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21655-743952/.minikube/bin
I0929 11:32:22.826783  790854 config.go:182] Loaded profile config "functional-550377": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 11:32:22.826902  790854 config.go:182] Loaded profile config "functional-550377": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 11:32:22.827327  790854 cli_runner.go:164] Run: docker container inspect functional-550377 --format={{.State.Status}}
I0929 11:32:22.845074  790854 ssh_runner.go:195] Run: systemctl --version
I0929 11:32:22.845118  790854 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-550377
I0929 11:32:22.861919  790854 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21655-743952/.minikube/machines/functional-550377/id_rsa Username:docker}
I0929 11:32:22.953787  790854 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.67s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-550377 ssh pgrep buildkitd: exit status 1 (248.638534ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 image build -t localhost/my-image:functional-550377 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-550377 image build -t localhost/my-image:functional-550377 testdata/build --alsologtostderr: (3.203147425s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-550377 image build -t localhost/my-image:functional-550377 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 7043289caed
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-550377
--> 541af1a138a
Successfully tagged localhost/my-image:functional-550377
541af1a138aca509dc490486b88aa2575881d31f2a1031d6a1bd89728d280f41
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-550377 image build -t localhost/my-image:functional-550377 testdata/build --alsologtostderr:
I0929 11:32:23.287268  791004 out.go:360] Setting OutFile to fd 1 ...
I0929 11:32:23.287366  791004 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 11:32:23.287375  791004 out.go:374] Setting ErrFile to fd 2...
I0929 11:32:23.287381  791004 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 11:32:23.287620  791004 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21655-743952/.minikube/bin
I0929 11:32:23.288237  791004 config.go:182] Loaded profile config "functional-550377": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 11:32:23.288879  791004 config.go:182] Loaded profile config "functional-550377": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 11:32:23.289313  791004 cli_runner.go:164] Run: docker container inspect functional-550377 --format={{.State.Status}}
I0929 11:32:23.306602  791004 ssh_runner.go:195] Run: systemctl --version
I0929 11:32:23.306662  791004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-550377
I0929 11:32:23.322988  791004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21655-743952/.minikube/machines/functional-550377/id_rsa Username:docker}
I0929 11:32:23.414598  791004 build_images.go:161] Building image from path: /tmp/build.2029741424.tar
I0929 11:32:23.414680  791004 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0929 11:32:23.424450  791004 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2029741424.tar
I0929 11:32:23.427789  791004 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2029741424.tar: stat -c "%s %y" /var/lib/minikube/build/build.2029741424.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2029741424.tar': No such file or directory
I0929 11:32:23.427818  791004 ssh_runner.go:362] scp /tmp/build.2029741424.tar --> /var/lib/minikube/build/build.2029741424.tar (3072 bytes)
I0929 11:32:23.451851  791004 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2029741424
I0929 11:32:23.460633  791004 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2029741424 -xf /var/lib/minikube/build/build.2029741424.tar
I0929 11:32:23.469527  791004 crio.go:315] Building image: /var/lib/minikube/build/build.2029741424
I0929 11:32:23.469615  791004 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-550377 /var/lib/minikube/build/build.2029741424 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0929 11:32:26.421933  791004 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-550377 /var/lib/minikube/build/build.2029741424 --cgroup-manager=cgroupfs: (2.952287971s)
I0929 11:32:26.422022  791004 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2029741424
I0929 11:32:26.431623  791004 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2029741424.tar
I0929 11:32:26.440866  791004 build_images.go:217] Built localhost/my-image:functional-550377 from /tmp/build.2029741424.tar
I0929 11:32:26.440905  791004 build_images.go:133] succeeded building to: functional-550377
I0929 11:32:26.440910  791004 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.67s)
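
Editor's note: the STEP 1/3-3/3 lines above imply that testdata/build is essentially a two-instruction Dockerfile plus a content.txt file. A minimal sketch of reproducing the build by hand (the /tmp paths and the :demo tag are illustrative, not the repo's actual testdata):

    # Reconstruct a build context equivalent to the layout inferred from the STEP output.
    mkdir -p /tmp/build-demo && cd /tmp/build-demo
    echo "some content" > content.txt
    printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Dockerfile
    # image build ships the context into the node and builds with the crio/podman backend.
    minikube -p functional-550377 image build -t localhost/my-image:demo .
    minikube -p functional-550377 image ls | grep my-image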

TestFunctional/parallel/ImageCommands/Setup (1.99s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.962584108s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-550377
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.99s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 image load --daemon kicbase/echo-server:functional-550377 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-550377 image load --daemon kicbase/echo-server:functional-550377 --alsologtostderr: (1.10052226s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.33s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.41s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-550377 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-550377 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-550377 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-550377 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 784594: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.41s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-550377 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (18.21s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-550377 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [0f06d48f-4e0e-4524-8206-6885a99c8b64] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [0f06d48f-4e0e-4524-8206-6885a99c8b64] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 18.003597152s
I0929 11:32:06.456026  747468 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (18.21s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.89s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 image load --daemon kicbase/echo-server:functional-550377 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.89s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.98s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-550377
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 image load --daemon kicbase/echo-server:functional-550377 --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-linux-amd64 -p functional-550377 image load --daemon kicbase/echo-server:functional-550377 --alsologtostderr: (4.831568558s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.98s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 image save kicbase/echo-server:functional-550377 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.50s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.64s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 image rm kicbase/echo-server:functional-550377 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.64s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.78s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.78s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.55s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-550377
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 image save --daemon kicbase/echo-server:functional-550377 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-550377
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.55s)
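
Editor's note: ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon together exercise a full image round trip. Condensed into one hand-runnable sequence (the tarball path is illustrative):

    minikube -p functional-550377 image save kicbase/echo-server:functional-550377 /tmp/echo-server.tar
    minikube -p functional-550377 image rm kicbase/echo-server:functional-550377
    minikube -p functional-550377 image load /tmp/echo-server.tar
    # save --daemon pushes the image back into the host docker daemon; with crio it
    # reappears there under the localhost/ prefix, as the inspect call above shows.
    minikube -p functional-550377 image save --daemon kicbase/echo-server:functional-550377
    docker image inspect localhost/kicbase/echo-server:functional-550377 > /dev/null && echo ok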

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-550377 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.97.6.96 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
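
Editor's note: the TunnelCmd/serial chain above reduces to a simple recipe: start minikube tunnel, apply a LoadBalancer service, poll for an ingress IP, then curl it. A sketch under the assumption that testdata/testsvc.yaml creates the nginx-svc LoadBalancer service seen in WaitService/Setup:

    minikube -p functional-550377 tunnel --alsologtostderr &
    TUNNEL_PID=$!
    kubectl --context functional-550377 apply -f testdata/testsvc.yaml
    # Poll until the tunnel programs an ingress IP for the LoadBalancer service.
    until IP=$(kubectl --context functional-550377 get svc nginx-svc \
        -o jsonpath='{.status.loadBalancer.ingress[0].ip}') && [ -n "$IP" ]; do sleep 2; done
    curl -fsS "http://$IP" > /dev/null && echo "tunnel at http://$IP is working!"
    kill $TUNNEL_PID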

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-550377 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "321.640353ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "49.520253ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "321.626466ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "51.80275ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)
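
Editor's note: the JSON output checked above is meant for machine consumption; a sketch of parsing it with jq, where the .valid/.Name/.Status field names are assumptions about the schema rather than facts verified from this log:

    # Field names below assume a {"invalid":[...],"valid":[{"Name":...,"Status":...}]} shape.
    minikube profile list -o json | jq -r '.valid[] | "\(.Name)\t\(.Status)"'
    # --light skips the live status probe, which presumably explains the ~50ms runtime above.
    minikube profile list -o json --light | jq '.valid | length'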

TestFunctional/parallel/MountCmd/any-port (7.78s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-550377 /tmp/TestFunctionalparallelMountCmdany-port3041194954/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1759145527764346866" to /tmp/TestFunctionalparallelMountCmdany-port3041194954/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1759145527764346866" to /tmp/TestFunctionalparallelMountCmdany-port3041194954/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1759145527764346866" to /tmp/TestFunctionalparallelMountCmdany-port3041194954/001/test-1759145527764346866
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-550377 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (266.88836ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0929 11:32:08.031503  747468 retry.go:31] will retry after 645.727402ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 29 11:32 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 29 11:32 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 29 11:32 test-1759145527764346866
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 ssh cat /mount-9p/test-1759145527764346866
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-550377 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [84d0cb97-a141-4559-8a94-013ffca421d8] Pending
helpers_test.go:352: "busybox-mount" [84d0cb97-a141-4559-8a94-013ffca421d8] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [84d0cb97-a141-4559-8a94-013ffca421d8] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [84d0cb97-a141-4559-8a94-013ffca421d8] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.002516291s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-550377 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 ssh stat /mount-9p/created-by-test
I0929 11:32:14.702950  747468 detect.go:223] nested VM detected
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-550377 /tmp/TestFunctionalparallelMountCmdany-port3041194954/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.78s)
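
Editor's note: the any-port flow boils down to mounting a host directory into the guest over 9p, retrying findmnt until the mount appears, and reading a file from both sides. A minimal sketch with illustrative paths:

    mkdir -p /tmp/mount-demo && date > /tmp/mount-demo/created-by-test
    minikube mount -p functional-550377 /tmp/mount-demo:/mount-9p &
    MOUNT_PID=$!
    # The 9p mount comes up asynchronously, so retry findmnt the way the test does.
    until minikube -p functional-550377 ssh "findmnt -T /mount-9p | grep 9p"; do sleep 1; done
    minikube -p functional-550377 ssh "cat /mount-9p/created-by-test"
    kill $MOUNT_PID   # stopping the mount daemon tears the share down again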

TestFunctional/parallel/MountCmd/specific-port (1.6s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-550377 /tmp/TestFunctionalparallelMountCmdspecific-port2873214525/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-550377 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (267.485976ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0929 11:32:15.812508  747468 retry.go:31] will retry after 323.314351ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-550377 /tmp/TestFunctionalparallelMountCmdspecific-port2873214525/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-550377 ssh "sudo umount -f /mount-9p": exit status 1 (263.202183ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-550377 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-550377 /tmp/TestFunctionalparallelMountCmdspecific-port2873214525/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.60s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.61s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-550377 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3267768702/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-550377 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3267768702/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-550377 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3267768702/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-550377 ssh "findmnt -T" /mount1: exit status 1 (306.198522ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0929 11:32:17.449235  747468 retry.go:31] will retry after 483.744654ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-550377 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-550377 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3267768702/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-550377 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3267768702/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-550377 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3267768702/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.61s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.49s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 version -o=json --components
2025/09/29 11:32:34 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/Version/components (0.49s)

TestFunctional/parallel/ServiceCmd/List (1.69s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-550377 service list: (1.687629655s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.69s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.69s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-550377 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-550377 service list -o json: (1.686662965s)
functional_test.go:1504: Took "1.686794829s" to run "out/minikube-linux-amd64 -p functional-550377 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.69s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-550377
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-550377
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-550377
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (176.96s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-726636 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (2m56.261407503s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (176.96s)

TestMultiControlPlane/serial/DeployApp (6.78s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-726636 kubectl -- rollout status deployment/busybox: (4.759583057s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 kubectl -- exec busybox-7b57f96db7-7q4l2 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 kubectl -- exec busybox-7b57f96db7-g8qxr -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 kubectl -- exec busybox-7b57f96db7-s5ns2 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 kubectl -- exec busybox-7b57f96db7-7q4l2 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 kubectl -- exec busybox-7b57f96db7-g8qxr -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 kubectl -- exec busybox-7b57f96db7-s5ns2 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 kubectl -- exec busybox-7b57f96db7-7q4l2 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 kubectl -- exec busybox-7b57f96db7-g8qxr -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 kubectl -- exec busybox-7b57f96db7-s5ns2 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.78s)
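
Editor's note: DeployApp fans the same three nslookup probes across each busybox replica; the per-pod commands above collapse into a loop like the following (same assertions, just condensed; note the test, like this sketch, lists all pods in the default namespace rather than using a label selector):

    kubectl --context ha-726636 apply -f testdata/ha/ha-pod-dns-test.yaml
    kubectl --context ha-726636 rollout status deployment/busybox
    # Every replica must resolve an external name, the cluster-short name and the FQDN.
    for pod in $(kubectl --context ha-726636 get pods -o jsonpath='{.items[*].metadata.name}'); do
      for name in kubernetes.io kubernetes.default kubernetes.default.svc.cluster.local; do
        kubectl --context ha-726636 exec "$pod" -- nslookup "$name"
      done
    done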

TestMultiControlPlane/serial/PingHostFromPods (1.15s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 kubectl -- exec busybox-7b57f96db7-7q4l2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 kubectl -- exec busybox-7b57f96db7-7q4l2 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 kubectl -- exec busybox-7b57f96db7-g8qxr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 kubectl -- exec busybox-7b57f96db7-g8qxr -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 kubectl -- exec busybox-7b57f96db7-s5ns2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 kubectl -- exec busybox-7b57f96db7-s5ns2 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.15s)
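
Editor's note: the ping check extracts the host IP that minikube publishes as host.minikube.internal and pings it from inside a pod; the awk 'NR==5' / cut pipeline leans on nslookup's fixed output layout (answer address on line 5). A condensed sketch, with pod selection simplified to the first pod in the namespace:

    POD=$(kubectl --context ha-726636 get pods -o jsonpath='{.items[0].metadata.name}')
    # Resolve host.minikube.internal inside the pod and keep only the address field.
    HOST_IP=$(kubectl --context ha-726636 exec "$POD" -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    kubectl --context ha-726636 exec "$POD" -- sh -c "ping -c 1 $HOST_IP"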

TestMultiControlPlane/serial/AddWorkerNode (55.28s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 node add --alsologtostderr -v 5
E0929 11:45:24.922602  747468 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/addons-164332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-726636 node add --alsologtostderr -v 5: (54.41657004s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (55.28s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-726636 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.86s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.86s)

TestMultiControlPlane/serial/CopyFile (16.51s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 cp testdata/cp-test.txt ha-726636:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 ssh -n ha-726636 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 cp ha-726636:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3353827557/001/cp-test_ha-726636.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 ssh -n ha-726636 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 cp ha-726636:/home/docker/cp-test.txt ha-726636-m02:/home/docker/cp-test_ha-726636_ha-726636-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 ssh -n ha-726636 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 ssh -n ha-726636-m02 "sudo cat /home/docker/cp-test_ha-726636_ha-726636-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 cp ha-726636:/home/docker/cp-test.txt ha-726636-m03:/home/docker/cp-test_ha-726636_ha-726636-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 ssh -n ha-726636 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 ssh -n ha-726636-m03 "sudo cat /home/docker/cp-test_ha-726636_ha-726636-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 cp ha-726636:/home/docker/cp-test.txt ha-726636-m04:/home/docker/cp-test_ha-726636_ha-726636-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 ssh -n ha-726636 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 ssh -n ha-726636-m04 "sudo cat /home/docker/cp-test_ha-726636_ha-726636-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 cp testdata/cp-test.txt ha-726636-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 ssh -n ha-726636-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 cp ha-726636-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3353827557/001/cp-test_ha-726636-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 ssh -n ha-726636-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 cp ha-726636-m02:/home/docker/cp-test.txt ha-726636:/home/docker/cp-test_ha-726636-m02_ha-726636.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 ssh -n ha-726636-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 ssh -n ha-726636 "sudo cat /home/docker/cp-test_ha-726636-m02_ha-726636.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 cp ha-726636-m02:/home/docker/cp-test.txt ha-726636-m03:/home/docker/cp-test_ha-726636-m02_ha-726636-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 ssh -n ha-726636-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 ssh -n ha-726636-m03 "sudo cat /home/docker/cp-test_ha-726636-m02_ha-726636-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 cp ha-726636-m02:/home/docker/cp-test.txt ha-726636-m04:/home/docker/cp-test_ha-726636-m02_ha-726636-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 ssh -n ha-726636-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 ssh -n ha-726636-m04 "sudo cat /home/docker/cp-test_ha-726636-m02_ha-726636-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 cp testdata/cp-test.txt ha-726636-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 ssh -n ha-726636-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 cp ha-726636-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3353827557/001/cp-test_ha-726636-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 ssh -n ha-726636-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 cp ha-726636-m03:/home/docker/cp-test.txt ha-726636:/home/docker/cp-test_ha-726636-m03_ha-726636.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 ssh -n ha-726636-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 ssh -n ha-726636 "sudo cat /home/docker/cp-test_ha-726636-m03_ha-726636.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 cp ha-726636-m03:/home/docker/cp-test.txt ha-726636-m02:/home/docker/cp-test_ha-726636-m03_ha-726636-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 ssh -n ha-726636-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 ssh -n ha-726636-m02 "sudo cat /home/docker/cp-test_ha-726636-m03_ha-726636-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 cp ha-726636-m03:/home/docker/cp-test.txt ha-726636-m04:/home/docker/cp-test_ha-726636-m03_ha-726636-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 ssh -n ha-726636-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 ssh -n ha-726636-m04 "sudo cat /home/docker/cp-test_ha-726636-m03_ha-726636-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 cp testdata/cp-test.txt ha-726636-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 ssh -n ha-726636-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 cp ha-726636-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3353827557/001/cp-test_ha-726636-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 ssh -n ha-726636-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 cp ha-726636-m04:/home/docker/cp-test.txt ha-726636:/home/docker/cp-test_ha-726636-m04_ha-726636.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 ssh -n ha-726636-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 ssh -n ha-726636 "sudo cat /home/docker/cp-test_ha-726636-m04_ha-726636.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 cp ha-726636-m04:/home/docker/cp-test.txt ha-726636-m02:/home/docker/cp-test_ha-726636-m04_ha-726636-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 ssh -n ha-726636-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 ssh -n ha-726636-m02 "sudo cat /home/docker/cp-test_ha-726636-m04_ha-726636-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 cp ha-726636-m04:/home/docker/cp-test.txt ha-726636-m03:/home/docker/cp-test_ha-726636-m04_ha-726636-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 ssh -n ha-726636-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 ssh -n ha-726636-m03 "sudo cat /home/docker/cp-test_ha-726636-m04_ha-726636-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (16.51s)
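
Editor's note: each cp/ssh pair above round-trips a single file between the host and a node, or between two nodes. The same pattern, condensed (the /tmp output path is illustrative):

    # Host -> node -> host round trip, then a node-to-node copy verified over ssh -n.
    minikube -p ha-726636 cp testdata/cp-test.txt ha-726636:/home/docker/cp-test.txt
    minikube -p ha-726636 cp ha-726636:/home/docker/cp-test.txt /tmp/cp-test-roundtrip.txt
    diff testdata/cp-test.txt /tmp/cp-test-roundtrip.txt
    minikube -p ha-726636 cp ha-726636:/home/docker/cp-test.txt ha-726636-m02:/home/docker/cp-test.txt
    minikube -p ha-726636 ssh -n ha-726636-m02 "sudo cat /home/docker/cp-test.txt"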

TestMultiControlPlane/serial/StopSecondaryNode (14.29s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 node stop m02 --alsologtostderr -v 5
E0929 11:46:46.275182  747468 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/functional-550377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:46:46.281565  747468 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/functional-550377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:46:46.292933  747468 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/functional-550377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:46:46.314357  747468 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/functional-550377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:46:46.355831  747468 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/functional-550377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:46:46.437345  747468 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/functional-550377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:46:46.598911  747468 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/functional-550377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-726636 node stop m02 --alsologtostderr -v 5: (13.606701503s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 status --alsologtostderr -v 5
E0929 11:46:46.920232  747468 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/functional-550377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-726636 status --alsologtostderr -v 5: exit status 7 (685.606481ms)

-- stdout --
	ha-726636
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-726636-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-726636-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-726636-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0929 11:46:46.750599  817074 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:46:46.750884  817074 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:46:46.750895  817074 out.go:374] Setting ErrFile to fd 2...
	I0929 11:46:46.750900  817074 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:46:46.751171  817074 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21655-743952/.minikube/bin
	I0929 11:46:46.751400  817074 out.go:368] Setting JSON to false
	I0929 11:46:46.751441  817074 mustload.go:65] Loading cluster: ha-726636
	I0929 11:46:46.751506  817074 notify.go:220] Checking for updates...
	I0929 11:46:46.751850  817074 config.go:182] Loaded profile config "ha-726636": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 11:46:46.751872  817074 status.go:174] checking status of ha-726636 ...
	I0929 11:46:46.752507  817074 cli_runner.go:164] Run: docker container inspect ha-726636 --format={{.State.Status}}
	I0929 11:46:46.771324  817074 status.go:371] ha-726636 host status = "Running" (err=<nil>)
	I0929 11:46:46.771361  817074 host.go:66] Checking if "ha-726636" exists ...
	I0929 11:46:46.771620  817074 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-726636
	I0929 11:46:46.790019  817074 host.go:66] Checking if "ha-726636" exists ...
	I0929 11:46:46.790371  817074 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 11:46:46.790436  817074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-726636
	I0929 11:46:46.808489  817074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21655-743952/.minikube/machines/ha-726636/id_rsa Username:docker}
	I0929 11:46:46.902792  817074 ssh_runner.go:195] Run: systemctl --version
	I0929 11:46:46.907450  817074 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 11:46:46.920088  817074 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 11:46:46.978403  817074 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-29 11:46:46.967556422 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 11:46:46.979019  817074 kubeconfig.go:125] found "ha-726636" server: "https://192.168.49.254:8443"
	I0929 11:46:46.979061  817074 api_server.go:166] Checking apiserver status ...
	I0929 11:46:46.979106  817074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 11:46:46.991762  817074 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1462/cgroup
	W0929 11:46:47.002035  817074 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1462/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0929 11:46:47.002088  817074 ssh_runner.go:195] Run: ls
	I0929 11:46:47.005889  817074 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0929 11:46:47.010074  817074 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0929 11:46:47.010100  817074 status.go:463] ha-726636 apiserver status = Running (err=<nil>)
	I0929 11:46:47.010112  817074 status.go:176] ha-726636 status: &{Name:ha-726636 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 11:46:47.010136  817074 status.go:174] checking status of ha-726636-m02 ...
	I0929 11:46:47.010443  817074 cli_runner.go:164] Run: docker container inspect ha-726636-m02 --format={{.State.Status}}
	I0929 11:46:47.029468  817074 status.go:371] ha-726636-m02 host status = "Stopped" (err=<nil>)
	I0929 11:46:47.029489  817074 status.go:384] host is not running, skipping remaining checks
	I0929 11:46:47.029498  817074 status.go:176] ha-726636-m02 status: &{Name:ha-726636-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 11:46:47.029528  817074 status.go:174] checking status of ha-726636-m03 ...
	I0929 11:46:47.029783  817074 cli_runner.go:164] Run: docker container inspect ha-726636-m03 --format={{.State.Status}}
	I0929 11:46:47.047690  817074 status.go:371] ha-726636-m03 host status = "Running" (err=<nil>)
	I0929 11:46:47.047718  817074 host.go:66] Checking if "ha-726636-m03" exists ...
	I0929 11:46:47.048017  817074 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-726636-m03
	I0929 11:46:47.065771  817074 host.go:66] Checking if "ha-726636-m03" exists ...
	I0929 11:46:47.066065  817074 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 11:46:47.066107  817074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-726636-m03
	I0929 11:46:47.083602  817074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/21655-743952/.minikube/machines/ha-726636-m03/id_rsa Username:docker}
	I0929 11:46:47.177046  817074 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 11:46:47.191604  817074 kubeconfig.go:125] found "ha-726636" server: "https://192.168.49.254:8443"
	I0929 11:46:47.191634  817074 api_server.go:166] Checking apiserver status ...
	I0929 11:46:47.191667  817074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 11:46:47.203619  817074 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1369/cgroup
	W0929 11:46:47.213658  817074 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1369/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0929 11:46:47.213708  817074 ssh_runner.go:195] Run: ls
	I0929 11:46:47.217666  817074 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0929 11:46:47.222090  817074 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0929 11:46:47.222113  817074 status.go:463] ha-726636-m03 apiserver status = Running (err=<nil>)
	I0929 11:46:47.222121  817074 status.go:176] ha-726636-m03 status: &{Name:ha-726636-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 11:46:47.222137  817074 status.go:174] checking status of ha-726636-m04 ...
	I0929 11:46:47.222381  817074 cli_runner.go:164] Run: docker container inspect ha-726636-m04 --format={{.State.Status}}
	I0929 11:46:47.240074  817074 status.go:371] ha-726636-m04 host status = "Running" (err=<nil>)
	I0929 11:46:47.240104  817074 host.go:66] Checking if "ha-726636-m04" exists ...
	I0929 11:46:47.240436  817074 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-726636-m04
	I0929 11:46:47.259228  817074 host.go:66] Checking if "ha-726636-m04" exists ...
	I0929 11:46:47.259513  817074 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 11:46:47.259574  817074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-726636-m04
	I0929 11:46:47.278112  817074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32918 SSHKeyPath:/home/jenkins/minikube-integration/21655-743952/.minikube/machines/ha-726636-m04/id_rsa Username:docker}
	I0929 11:46:47.371596  817074 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 11:46:47.383716  817074 status.go:176] ha-726636-m04 status: &{Name:ha-726636-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (14.29s)
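The stderr trace above also documents the shape of the per-node status probe: inspect the container state through the Docker CLI, check kubelet with systemctl over SSH, then poll the API server's /healthz endpoint, skipping the last two steps when the host is stopped. A minimal Go sketch of that flow, assuming local execution and simplified TLS handling (this is not the actual status.go code, which runs the checks over SSH and trusts minikube's CA):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os/exec"
	"strings"
)

// containerState mirrors `docker container inspect <name> --format={{.State.Status}}`.
func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	return strings.TrimSpace(string(out)), err
}

// kubeletActive mirrors `sudo systemctl is-active --quiet service kubelet`;
// in the trace this runs over SSH inside the node, here locally for brevity.
func kubeletActive() bool {
	return exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run() == nil
}

// apiserverHealthy mirrors the GET against https://192.168.49.254:8443/healthz.
// Skipping TLS verification is a shortcut; the real check trusts the cluster CA.
func apiserverHealthy(url string) bool {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get(url)
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}

func main() {
	for _, node := range []string{"ha-726636", "ha-726636-m02"} {
		state, err := containerState(node)
		if err != nil || state != "running" {
			// Host is not running: skip the remaining checks, as status.go:384 does.
			fmt.Printf("%s: host %s\n", node, state)
			continue
		}
		fmt.Printf("%s: kubelet active=%v apiserver healthy=%v\n",
			node, kubeletActive(), apiserverHealthy("https://192.168.49.254:8443/healthz"))
	}
}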

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E0929 11:46:47.562214  747468 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/functional-550377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:46:47.994973  747468 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/addons-164332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.71s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (9.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 node start m02 --alsologtostderr -v 5
E0929 11:46:48.843703  747468 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/functional-550377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:46:51.406878  747468 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/functional-550377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-726636 node start m02 --alsologtostderr -v 5: (8.132613643s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 status --alsologtostderr -v 5
E0929 11:46:56.528784  747468 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/functional-550377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (9.06s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.88s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (108.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 stop --alsologtostderr -v 5
E0929 11:47:06.770627  747468 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/functional-550377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:47:27.252146  747468 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/functional-550377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-726636 stop --alsologtostderr -v 5: (50.08707934s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 start --wait true --alsologtostderr -v 5
E0929 11:48:08.213938  747468 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/functional-550377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-726636 start --wait true --alsologtostderr -v 5: (58.308001257s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (108.51s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (11.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-726636 node delete m03 --alsologtostderr -v 5: (10.576792611s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.42s)
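ha_test.go:521 above verifies readiness with a kubectl go-template. kubectl evaluates such templates over the raw JSON object, which is why the lowercase keys (.items, .status, .type) work there. A self-contained sketch that runs the same template with Go's text/template against a hand-written two-node document (the JSON below is illustrative, not captured output):

package main

import (
	"encoding/json"
	"os"
	"text/template"
)

// Feeding a decoded map lets the lowercase keys from ha_test.go:521 work as-is,
// just as they do when kubectl applies the template to the API response.
const nodesJSON = `{"items":[{"status":{"conditions":[{"type":"Ready","status":"True"}]}},
                             {"status":{"conditions":[{"type":"Ready","status":"True"}]}}]}`

const tmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

func main() {
	var nodes map[string]interface{}
	if err := json.Unmarshal([]byte(nodesJSON), &nodes); err != nil {
		panic(err)
	}
	// Prints " True" once per Ready node, the shape the test asserts on.
	template.Must(template.New("ready").Parse(tmpl)).Execute(os.Stdout, nodes)
}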

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.71s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (48.82s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 stop --alsologtostderr -v 5
E0929 11:49:30.136182  747468 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/functional-550377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-726636 stop --alsologtostderr -v 5: (48.704183713s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-726636 status --alsologtostderr -v 5: exit status 7 (115.354776ms)

                                                
                                                
-- stdout --
	ha-726636
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-726636-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-726636-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 11:49:47.432215  833370 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:49:47.432506  833370 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:49:47.432514  833370 out.go:374] Setting ErrFile to fd 2...
	I0929 11:49:47.432518  833370 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:49:47.432751  833370 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21655-743952/.minikube/bin
	I0929 11:49:47.432942  833370 out.go:368] Setting JSON to false
	I0929 11:49:47.432990  833370 mustload.go:65] Loading cluster: ha-726636
	I0929 11:49:47.433132  833370 notify.go:220] Checking for updates...
	I0929 11:49:47.433491  833370 config.go:182] Loaded profile config "ha-726636": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 11:49:47.433515  833370 status.go:174] checking status of ha-726636 ...
	I0929 11:49:47.434082  833370 cli_runner.go:164] Run: docker container inspect ha-726636 --format={{.State.Status}}
	I0929 11:49:47.455620  833370 status.go:371] ha-726636 host status = "Stopped" (err=<nil>)
	I0929 11:49:47.455653  833370 status.go:384] host is not running, skipping remaining checks
	I0929 11:49:47.455660  833370 status.go:176] ha-726636 status: &{Name:ha-726636 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 11:49:47.455686  833370 status.go:174] checking status of ha-726636-m02 ...
	I0929 11:49:47.455958  833370 cli_runner.go:164] Run: docker container inspect ha-726636-m02 --format={{.State.Status}}
	I0929 11:49:47.475015  833370 status.go:371] ha-726636-m02 host status = "Stopped" (err=<nil>)
	I0929 11:49:47.475044  833370 status.go:384] host is not running, skipping remaining checks
	I0929 11:49:47.475051  833370 status.go:176] ha-726636-m02 status: &{Name:ha-726636-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 11:49:47.475080  833370 status.go:174] checking status of ha-726636-m04 ...
	I0929 11:49:47.475420  833370 cli_runner.go:164] Run: docker container inspect ha-726636-m04 --format={{.State.Status}}
	I0929 11:49:47.494545  833370 status.go:371] ha-726636-m04 host status = "Stopped" (err=<nil>)
	I0929 11:49:47.494580  833370 status.go:384] host is not running, skipping remaining checks
	I0929 11:49:47.494590  833370 status.go:176] ha-726636-m04 status: &{Name:ha-726636-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (48.82s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (57.97s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E0929 11:50:24.923290  747468 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/addons-164332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-726636 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (57.173553679s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (57.97s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.67s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (35.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-726636 node add --control-plane --alsologtostderr -v 5: (34.79746612s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-726636 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (35.64s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.87s)

                                                
                                    
TestJSONOutput/start/Command (68.36s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-554185 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E0929 11:51:46.275992  747468 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/functional-550377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:52:13.985176  747468 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/functional-550377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-554185 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m8.36368159s)
--- PASS: TestJSONOutput/start/Command (68.36s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.63s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-554185 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.63s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.62s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-554185 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.62s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (6.09s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-554185 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-554185 --output=json --user=testUser: (6.086139391s)
--- PASS: TestJSONOutput/stop/Command (6.09s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.2s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-114057 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-114057 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (62.307204ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"ffdc4a83-c37d-4941-ba74-2503693be051","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-114057] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1ef34e23-6aed-417e-9b2f-4a124b29c387","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21655"}}
	{"specversion":"1.0","id":"40fdfb9f-2a15-4fa6-acd0-96e868601c69","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a7998875-b20d-4109-b760-beb986ddd789","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21655-743952/kubeconfig"}}
	{"specversion":"1.0","id":"0131c89d-cee4-4efb-af51-36bab0a7ea2d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21655-743952/.minikube"}}
	{"specversion":"1.0","id":"b11dfdf8-9fe1-4564-a952-604b5dc24724","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"050b3d0b-a015-49a0-a1d5-5f36be9c76f5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f60ed2f8-95d2-4662-95b9-ff7b020a45ff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-114057" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-114057
--- PASS: TestErrorJSONOutput (0.20s)
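Each line minikube emits with --output=json is a CloudEvents 1.0 envelope like those captured above. A small decoding sketch, modeling only the fields visible in this log (the real schema may carry more):

package main

import (
	"encoding/json"
	"fmt"
)

// event covers just the fields seen in the captured output above.
type event struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

func main() {
	// The final error event from the TestErrorJSONOutput run above.
	line := `{"specversion":"1.0","id":"f60ed2f8-95d2-4662-95b9-ff7b020a45ff",` +
		`"source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error",` +
		`"datacontenttype":"application/json","data":{"exitcode":"56",` +
		`"message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS"}}`
	var e event
	if err := json.Unmarshal([]byte(line), &e); err != nil {
		panic(err)
	}
	fmt.Printf("%s: %s (exit %s)\n", e.Data["name"], e.Data["message"], e.Data["exitcode"])
}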

                                                
                                    
TestKicCustomNetwork/create_custom_network (36.65s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-280965 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-280965 --network=: (34.478188959s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-280965" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-280965
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-280965: (2.150476858s)
--- PASS: TestKicCustomNetwork/create_custom_network (36.65s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (25.44s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-140278 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-140278 --network=bridge: (23.443256471s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-140278" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-140278
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-140278: (1.978915559s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (25.44s)

                                                
                                    
TestKicExistingNetwork (24.16s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I0929 11:53:54.033046  747468 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0929 11:53:54.050663  747468 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0929 11:53:54.050747  747468 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0929 11:53:54.050765  747468 cli_runner.go:164] Run: docker network inspect existing-network
W0929 11:53:54.068325  747468 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0929 11:53:54.068362  747468 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I0929 11:53:54.068385  747468 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I0929 11:53:54.068567  747468 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0929 11:53:54.086635  747468 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-78cef99c6d16 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:06:45:e4:7f:ab:f7} reservation:<nil>}
I0929 11:53:54.087172  747468 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c7b940}
I0929 11:53:54.087224  747468 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0929 11:53:54.087289  747468 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0929 11:53:54.144794  747468 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-821977 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-821977 --network=existing-network: (22.054783725s)
helpers_test.go:175: Cleaning up "existing-network-821977" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-821977
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-821977: (1.953688196s)
I0929 11:54:18.171196  747468 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (24.16s)
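The trace above shows the free-subnet scan behind KIC networking: 192.168.49.0/24 is skipped as taken and 192.168.58.0/24 is chosen before docker network create runs. A simplified Go sketch of that scan; the taken set is hypothetical here (the real code derives it from docker network inspect), and the +9 step is inferred from the 49 -> 58 jump in the log:

package main

import (
	"fmt"
	"net"
)

func main() {
	// Hypothetical taken set; minikube builds this from `docker network inspect`.
	taken := map[string]bool{"192.168.49.0/24": true}

	// Walk candidate private /24s under 192.168.0.0/16, stepping the third
	// octet by 9 to mirror the 49 -> 58 jump seen in network.go above.
	for third := 49; third < 255; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if taken[cidr] {
			fmt.Println("skipping subnet", cidr, "that is taken")
			continue
		}
		_, subnet, _ := net.ParseCIDR(cidr)
		fmt.Println("using free private subnet", subnet)
		// Next step, per the log:
		// docker network create --driver=bridge --subnet=192.168.58.0/24 ...
		break
	}
}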

                                                
                                    
TestKicCustomSubnet (24.63s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-442462 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-442462 --subnet=192.168.60.0/24: (22.439013823s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-442462 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-442462" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-442462
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-442462: (2.17287035s)
--- PASS: TestKicCustomSubnet (24.63s)

                                                
                                    
TestKicStaticIP (25.95s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-333397 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-333397 --static-ip=192.168.200.200: (23.587739359s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-333397 ip
helpers_test.go:175: Cleaning up "static-ip-333397" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-333397
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-333397: (2.218557916s)
--- PASS: TestKicStaticIP (25.95s)

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (48.25s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-156582 --driver=docker  --container-runtime=crio
E0929 11:55:24.929656  747468 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/addons-164332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-156582 --driver=docker  --container-runtime=crio: (20.39131274s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-170864 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-170864 --driver=docker  --container-runtime=crio: (21.729794725s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-156582
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-170864
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-170864" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-170864
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-170864: (2.405909828s)
helpers_test.go:175: Cleaning up "first-156582" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-156582
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-156582: (2.446762506s)
--- PASS: TestMinikubeProfile (48.25s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (6.71s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-070644 --memory=3072 --mount-string /tmp/TestMountStartserial4185169001/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-070644 --memory=3072 --mount-string /tmp/TestMountStartserial4185169001/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.714145361s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.71s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-070644 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (6.41s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-086860 --memory=3072 --mount-string /tmp/TestMountStartserial4185169001/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-086860 --memory=3072 --mount-string /tmp/TestMountStartserial4185169001/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.409726759s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.41s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-086860 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.67s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-070644 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-070644 --alsologtostderr -v=5: (1.67192112s)
--- PASS: TestMountStart/serial/DeleteFirst (1.67s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-086860 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                    
TestMountStart/serial/Stop (1.19s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-086860
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-086860: (1.193457529s)
--- PASS: TestMountStart/serial/Stop (1.19s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.73s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-086860
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-086860: (6.727983359s)
--- PASS: TestMountStart/serial/RestartStopped (7.73s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-086860 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (92.4s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-411391 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E0929 11:56:46.275656  747468 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/functional-550377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-411391 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m31.904298298s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-411391 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (92.40s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-411391 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-411391 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-411391 -- rollout status deployment/busybox: (4.143468739s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-411391 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-411391 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-411391 -- exec busybox-7b57f96db7-8nhcb -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-411391 -- exec busybox-7b57f96db7-f9m5s -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-411391 -- exec busybox-7b57f96db7-8nhcb -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-411391 -- exec busybox-7b57f96db7-f9m5s -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-411391 -- exec busybox-7b57f96db7-8nhcb -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-411391 -- exec busybox-7b57f96db7-f9m5s -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.69s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.8s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-411391 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-411391 -- exec busybox-7b57f96db7-8nhcb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-411391 -- exec busybox-7b57f96db7-8nhcb -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-411391 -- exec busybox-7b57f96db7-f9m5s -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-411391 -- exec busybox-7b57f96db7-f9m5s -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.80s)

                                                
                                    
TestMultiNode/serial/AddNode (24.63s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-411391 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-411391 -v=5 --alsologtostderr: (23.97557462s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-411391 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (24.63s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-411391 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.67s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.67s)

                                                
                                    
TestMultiNode/serial/CopyFile (9.8s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-411391 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-411391 cp testdata/cp-test.txt multinode-411391:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-411391 ssh -n multinode-411391 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-411391 cp multinode-411391:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3942765031/001/cp-test_multinode-411391.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-411391 ssh -n multinode-411391 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-411391 cp multinode-411391:/home/docker/cp-test.txt multinode-411391-m02:/home/docker/cp-test_multinode-411391_multinode-411391-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-411391 ssh -n multinode-411391 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-411391 ssh -n multinode-411391-m02 "sudo cat /home/docker/cp-test_multinode-411391_multinode-411391-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-411391 cp multinode-411391:/home/docker/cp-test.txt multinode-411391-m03:/home/docker/cp-test_multinode-411391_multinode-411391-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-411391 ssh -n multinode-411391 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-411391 ssh -n multinode-411391-m03 "sudo cat /home/docker/cp-test_multinode-411391_multinode-411391-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-411391 cp testdata/cp-test.txt multinode-411391-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-411391 ssh -n multinode-411391-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-411391 cp multinode-411391-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3942765031/001/cp-test_multinode-411391-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-411391 ssh -n multinode-411391-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-411391 cp multinode-411391-m02:/home/docker/cp-test.txt multinode-411391:/home/docker/cp-test_multinode-411391-m02_multinode-411391.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-411391 ssh -n multinode-411391-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-411391 ssh -n multinode-411391 "sudo cat /home/docker/cp-test_multinode-411391-m02_multinode-411391.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-411391 cp multinode-411391-m02:/home/docker/cp-test.txt multinode-411391-m03:/home/docker/cp-test_multinode-411391-m02_multinode-411391-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-411391 ssh -n multinode-411391-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-411391 ssh -n multinode-411391-m03 "sudo cat /home/docker/cp-test_multinode-411391-m02_multinode-411391-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-411391 cp testdata/cp-test.txt multinode-411391-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-411391 ssh -n multinode-411391-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-411391 cp multinode-411391-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3942765031/001/cp-test_multinode-411391-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-411391 ssh -n multinode-411391-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-411391 cp multinode-411391-m03:/home/docker/cp-test.txt multinode-411391:/home/docker/cp-test_multinode-411391-m03_multinode-411391.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-411391 ssh -n multinode-411391-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-411391 ssh -n multinode-411391 "sudo cat /home/docker/cp-test_multinode-411391-m03_multinode-411391.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-411391 cp multinode-411391-m03:/home/docker/cp-test.txt multinode-411391-m02:/home/docker/cp-test_multinode-411391-m03_multinode-411391-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-411391 ssh -n multinode-411391-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-411391 ssh -n multinode-411391-m02 "sudo cat /home/docker/cp-test_multinode-411391-m03_multinode-411391-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.80s)
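
Note: every cp above is immediately paired with an `ssh -n <node> "sudo cat ..."` readback, so each copy is verified on the receiving node. A minimal sketch of that round-trip, shelling out to the same binary (the exec wiring is illustrative, not the test's helper code):

    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
    )

    // copyAndVerify copies src into node:dst with `minikube cp`, then reads
    // the file back over SSH so the caller can compare contents.
    func copyAndVerify(profile, node, src, dst string) ([]byte, error) {
        cp := exec.Command("out/minikube-linux-amd64", "-p", profile, "cp", src, node+":"+dst)
        if out, err := cp.CombinedOutput(); err != nil {
            return nil, fmt.Errorf("cp failed: %v: %s", err, out)
        }
        var buf bytes.Buffer
        cat := exec.Command("out/minikube-linux-amd64", "-p", profile, "ssh", "-n", node, "sudo cat "+dst)
        cat.Stdout = &buf
        if err := cat.Run(); err != nil {
            return nil, fmt.Errorf("readback failed: %v", err)
        }
        return buf.Bytes(), nil
    }

    func main() {
        data, err := copyAndVerify("multinode-411391", "multinode-411391-m02", "testdata/cp-test.txt", "/home/docker/cp-test.txt")
        fmt.Printf("%q %v\n", data, err)
    }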

                                                
                                    
TestMultiNode/serial/StopNode (2.19s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-411391 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-411391 node stop m03: (1.211139355s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-411391 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-411391 status: exit status 7 (494.665902ms)

                                                
                                                
-- stdout --
	multinode-411391
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-411391-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-411391-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-411391 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-411391 status --alsologtostderr: exit status 7 (485.530049ms)

                                                
                                                
-- stdout --
	multinode-411391
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-411391-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-411391-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 11:58:39.529692  896128 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:58:39.530013  896128 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:58:39.530025  896128 out.go:374] Setting ErrFile to fd 2...
	I0929 11:58:39.530031  896128 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:58:39.530217  896128 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21655-743952/.minikube/bin
	I0929 11:58:39.530417  896128 out.go:368] Setting JSON to false
	I0929 11:58:39.530464  896128 mustload.go:65] Loading cluster: multinode-411391
	I0929 11:58:39.530595  896128 notify.go:220] Checking for updates...
	I0929 11:58:39.530904  896128 config.go:182] Loaded profile config "multinode-411391": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 11:58:39.530929  896128 status.go:174] checking status of multinode-411391 ...
	I0929 11:58:39.531434  896128 cli_runner.go:164] Run: docker container inspect multinode-411391 --format={{.State.Status}}
	I0929 11:58:39.551488  896128 status.go:371] multinode-411391 host status = "Running" (err=<nil>)
	I0929 11:58:39.551527  896128 host.go:66] Checking if "multinode-411391" exists ...
	I0929 11:58:39.551798  896128 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-411391
	I0929 11:58:39.569541  896128 host.go:66] Checking if "multinode-411391" exists ...
	I0929 11:58:39.569810  896128 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 11:58:39.569862  896128 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-411391
	I0929 11:58:39.587443  896128 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33024 SSHKeyPath:/home/jenkins/minikube-integration/21655-743952/.minikube/machines/multinode-411391/id_rsa Username:docker}
	I0929 11:58:39.682734  896128 ssh_runner.go:195] Run: systemctl --version
	I0929 11:58:39.687516  896128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 11:58:39.699895  896128 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 11:58:39.756387  896128 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:65 SystemTime:2025-09-29 11:58:39.74511198 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 11:58:39.756920  896128 kubeconfig.go:125] found "multinode-411391" server: "https://192.168.67.2:8443"
	I0929 11:58:39.756955  896128 api_server.go:166] Checking apiserver status ...
	I0929 11:58:39.757011  896128 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 11:58:39.768988  896128 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1438/cgroup
	W0929 11:58:39.779588  896128 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1438/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0929 11:58:39.779651  896128 ssh_runner.go:195] Run: ls
	I0929 11:58:39.783398  896128 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0929 11:58:39.787793  896128 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0929 11:58:39.787816  896128 status.go:463] multinode-411391 apiserver status = Running (err=<nil>)
	I0929 11:58:39.787827  896128 status.go:176] multinode-411391 status: &{Name:multinode-411391 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 11:58:39.787843  896128 status.go:174] checking status of multinode-411391-m02 ...
	I0929 11:58:39.788119  896128 cli_runner.go:164] Run: docker container inspect multinode-411391-m02 --format={{.State.Status}}
	I0929 11:58:39.805952  896128 status.go:371] multinode-411391-m02 host status = "Running" (err=<nil>)
	I0929 11:58:39.805997  896128 host.go:66] Checking if "multinode-411391-m02" exists ...
	I0929 11:58:39.806301  896128 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-411391-m02
	I0929 11:58:39.824064  896128 host.go:66] Checking if "multinode-411391-m02" exists ...
	I0929 11:58:39.824338  896128 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 11:58:39.824381  896128 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-411391-m02
	I0929 11:58:39.841401  896128 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33029 SSHKeyPath:/home/jenkins/minikube-integration/21655-743952/.minikube/machines/multinode-411391-m02/id_rsa Username:docker}
	I0929 11:58:39.936349  896128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 11:58:39.948388  896128 status.go:176] multinode-411391-m02 status: &{Name:multinode-411391-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0929 11:58:39.948427  896128 status.go:174] checking status of multinode-411391-m03 ...
	I0929 11:58:39.948685  896128 cli_runner.go:164] Run: docker container inspect multinode-411391-m03 --format={{.State.Status}}
	I0929 11:58:39.966320  896128 status.go:371] multinode-411391-m03 host status = "Stopped" (err=<nil>)
	I0929 11:58:39.966344  896128 status.go:384] host is not running, skipping remaining checks
	I0929 11:58:39.966353  896128 status.go:176] multinode-411391-m03 status: &{Name:multinode-411391-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.19s)
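
Note: the non-zero exits above are expected. In this log, `minikube status` returns exit status 7 when at least one host is stopped, while stdout still carries the full per-node report. A sketch of consuming that contract (treating 7 as "degraded but parseable" is inferred from this run, not from documented guarantees):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    // clusterStatus runs `minikube status` and reports whether every node
    // is up, keeping the textual report even on a degraded exit.
    func clusterStatus(profile string) (string, bool, error) {
        cmd := exec.Command("out/minikube-linux-amd64", "-p", profile, "status")
        out, err := cmd.Output()
        var ee *exec.ExitError
        if errors.As(err, &ee) && ee.ExitCode() == 7 {
            return string(out), false, nil // stopped host(s); stdout still valid
        }
        if err != nil {
            return "", false, err
        }
        return string(out), true, nil
    }

    func main() {
        report, healthy, err := clusterStatus("multinode-411391")
        fmt.Println(healthy, err)
        fmt.Print(report)
    }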

                                                
                                    
TestMultiNode/serial/StartAfterStop (7.5s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-411391 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-411391 node start m03 -v=5 --alsologtostderr: (6.77215584s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-411391 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.50s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (80.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-411391
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-411391
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-411391: (31.423293059s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-411391 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-411391 --wait=true -v=5 --alsologtostderr: (49.041983632s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-411391
--- PASS: TestMultiNode/serial/RestartKeepsNodes (80.59s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-411391 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-411391 node delete m03: (4.72685449s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-411391 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.38s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (30.62s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-411391 stop
E0929 12:00:24.922928  747468 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/addons-164332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-411391 stop: (30.430814391s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-411391 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-411391 status: exit status 7 (92.963606ms)

                                                
                                                
-- stdout --
	multinode-411391
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-411391-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-411391 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-411391 status --alsologtostderr: exit status 7 (97.697105ms)

                                                
                                                
-- stdout --
	multinode-411391
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-411391-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 12:00:44.012952  906355 out.go:360] Setting OutFile to fd 1 ...
	I0929 12:00:44.013120  906355 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:00:44.013133  906355 out.go:374] Setting ErrFile to fd 2...
	I0929 12:00:44.013140  906355 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:00:44.013363  906355 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21655-743952/.minikube/bin
	I0929 12:00:44.013561  906355 out.go:368] Setting JSON to false
	I0929 12:00:44.013600  906355 mustload.go:65] Loading cluster: multinode-411391
	I0929 12:00:44.013765  906355 notify.go:220] Checking for updates...
	I0929 12:00:44.014036  906355 config.go:182] Loaded profile config "multinode-411391": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 12:00:44.014060  906355 status.go:174] checking status of multinode-411391 ...
	I0929 12:00:44.014539  906355 cli_runner.go:164] Run: docker container inspect multinode-411391 --format={{.State.Status}}
	I0929 12:00:44.036729  906355 status.go:371] multinode-411391 host status = "Stopped" (err=<nil>)
	I0929 12:00:44.036784  906355 status.go:384] host is not running, skipping remaining checks
	I0929 12:00:44.036793  906355 status.go:176] multinode-411391 status: &{Name:multinode-411391 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 12:00:44.036825  906355 status.go:174] checking status of multinode-411391-m02 ...
	I0929 12:00:44.037144  906355 cli_runner.go:164] Run: docker container inspect multinode-411391-m02 --format={{.State.Status}}
	I0929 12:00:44.057154  906355 status.go:371] multinode-411391-m02 host status = "Stopped" (err=<nil>)
	I0929 12:00:44.057183  906355 status.go:384] host is not running, skipping remaining checks
	I0929 12:00:44.057192  906355 status.go:176] multinode-411391-m02 status: &{Name:multinode-411391-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (30.62s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (50.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-411391 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-411391 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (49.573458131s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-411391 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (50.18s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (26.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-411391
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-411391-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-411391-m02 --driver=docker  --container-runtime=crio: exit status 14 (68.523281ms)

                                                
                                                
-- stdout --
	* [multinode-411391-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21655
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21655-743952/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21655-743952/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-411391-m02' is duplicated with machine name 'multinode-411391-m02' in profile 'multinode-411391'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-411391-m03 --driver=docker  --container-runtime=crio
E0929 12:01:46.275154  747468 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/functional-550377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-411391-m03 --driver=docker  --container-runtime=crio: (23.291220712s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-411391
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-411391: exit status 80 (306.011428ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-411391 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-411391-m03 already exists in multinode-411391-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-411391-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-411391-m03: (2.42275356s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (26.14s)

                                                
                                    
TestPreload (119.62s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-239752 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-239752 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (51.859738301s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-239752 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-239752 image pull gcr.io/k8s-minikube/busybox: (3.349088945s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-239752
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-239752: (5.857298693s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-239752 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E0929 12:03:09.348182  747468 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/functional-550377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:03:27.997104  747468 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/addons-164332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-239752 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (55.898443813s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-239752 image list
helpers_test.go:175: Cleaning up "test-preload-239752" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-239752
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-239752: (2.423715987s)
--- PASS: TestPreload (119.62s)
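
Note: the preload check above reduces to four steps: start with --preload=false, pull an image that no preload tarball contains, stop, then restart with preloads enabled and confirm `image list` still shows the pulled image. A compressed sketch of that sequence (commands copied from the log; error handling trimmed, and the initial --preload=false start is assumed to have already run):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func run(args ...string) (string, error) {
        out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
        return string(out), err
    }

    func main() {
        p := "test-preload-239752"
        run("-p", p, "image", "pull", "gcr.io/k8s-minikube/busybox") // not in any preload
        run("stop", "-p", p)
        // Restart; the preloaded image store must not clobber the pulled image.
        run("start", "-p", p, "--memory=3072", "--wait=true", "--driver=docker", "--container-runtime=crio")
        images, _ := run("-p", p, "image", "list")
        fmt.Println("busybox survived restart:", strings.Contains(images, "gcr.io/k8s-minikube/busybox"))
    }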

                                                
                                    
TestScheduledStopUnix (98.88s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-578841 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-578841 --memory=3072 --driver=docker  --container-runtime=crio: (23.136411126s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-578841 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-578841 -n scheduled-stop-578841
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-578841 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0929 12:04:27.816448  747468 retry.go:31] will retry after 84.839µs: open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/scheduled-stop-578841/pid: no such file or directory
I0929 12:04:27.817633  747468 retry.go:31] will retry after 81.209µs: open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/scheduled-stop-578841/pid: no such file or directory
I0929 12:04:27.818825  747468 retry.go:31] will retry after 212.951µs: open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/scheduled-stop-578841/pid: no such file or directory
I0929 12:04:27.819982  747468 retry.go:31] will retry after 401.553µs: open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/scheduled-stop-578841/pid: no such file or directory
I0929 12:04:27.821113  747468 retry.go:31] will retry after 546.291µs: open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/scheduled-stop-578841/pid: no such file or directory
I0929 12:04:27.822243  747468 retry.go:31] will retry after 436.433µs: open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/scheduled-stop-578841/pid: no such file or directory
I0929 12:04:27.823375  747468 retry.go:31] will retry after 1.696226ms: open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/scheduled-stop-578841/pid: no such file or directory
I0929 12:04:27.825575  747468 retry.go:31] will retry after 1.930893ms: open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/scheduled-stop-578841/pid: no such file or directory
I0929 12:04:27.827781  747468 retry.go:31] will retry after 1.477152ms: open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/scheduled-stop-578841/pid: no such file or directory
I0929 12:04:27.830004  747468 retry.go:31] will retry after 3.696472ms: open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/scheduled-stop-578841/pid: no such file or directory
I0929 12:04:27.834224  747468 retry.go:31] will retry after 3.860287ms: open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/scheduled-stop-578841/pid: no such file or directory
I0929 12:04:27.838470  747468 retry.go:31] will retry after 11.494856ms: open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/scheduled-stop-578841/pid: no such file or directory
I0929 12:04:27.850739  747468 retry.go:31] will retry after 17.106984ms: open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/scheduled-stop-578841/pid: no such file or directory
I0929 12:04:27.867983  747468 retry.go:31] will retry after 23.078421ms: open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/scheduled-stop-578841/pid: no such file or directory
I0929 12:04:27.891211  747468 retry.go:31] will retry after 16.244662ms: open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/scheduled-stop-578841/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-578841 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-578841 -n scheduled-stop-578841
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-578841
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-578841 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0929 12:05:24.930014  747468 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/addons-164332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-578841
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-578841: exit status 7 (67.9502ms)

                                                
                                                
-- stdout --
	scheduled-stop-578841
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-578841 -n scheduled-stop-578841
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-578841 -n scheduled-stop-578841: exit status 7 (68.930706ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-578841" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-578841
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-578841: (4.389694167s)
--- PASS: TestScheduledStopUnix (98.88s)
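
Note: the retry.go burst above is an exponential backoff poll for the scheduled-stop pid file; the logged delays grow from ~85µs to ~23ms with jitter. A minimal sketch of the same pattern (the double-with-jitter policy is inferred from the logged delays, not lifted from minikube's retry package):

    package main

    import (
        "fmt"
        "math/rand"
        "os"
        "time"
    )

    // waitForFile polls path with jittered exponential backoff until it
    // exists or the attempts are exhausted.
    func waitForFile(path string, attempts int) error {
        delay := 100 * time.Microsecond
        for i := 0; i < attempts; i++ {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            // Jitter around the current delay, then roughly double it.
            sleep := time.Duration(float64(delay) * (0.5 + rand.Float64()))
            fmt.Printf("will retry after %v\n", sleep)
            time.Sleep(sleep)
            delay *= 2
        }
        return fmt.Errorf("%s never appeared", path)
    }

    func main() {
        // Hypothetical path; the test polls the profile's scheduled-stop pid file.
        fmt.Println(waitForFile("/tmp/scheduled-stop/pid", 15))
    }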

                                                
                                    
TestInsufficientStorage (10.1s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-624149 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-624149 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.698820335s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"cfbc8d2d-6455-4f25-8cba-09cade285e35","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-624149] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"039e2b27-4ff3-4799-a5d4-729fd547ef69","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21655"}}
	{"specversion":"1.0","id":"88989f75-1240-48f3-9648-f61a28d7cff8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a17fb5b4-7ff4-468a-849b-089b4b1d340e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21655-743952/kubeconfig"}}
	{"specversion":"1.0","id":"7da1e8c4-3c80-46e1-8c8a-10390bc0118d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21655-743952/.minikube"}}
	{"specversion":"1.0","id":"4ebda0ab-985b-4bab-acfe-2cb181bcb6c3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"a2edd261-94f6-4c63-8a72-dfc1d1ded4a3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"9d35979e-f147-4a36-b9bc-e9582cd0d6e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"b752e2a1-2f16-4757-b6f4-6d7585b10f20","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"999fe357-cf4e-496f-9589-6cab2771af5c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"64593ad5-e2fe-4ef9-9207-a75fdd62c809","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"d1d7b2db-621d-4f5d-9cf0-1d184c213b0f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-624149\" primary control-plane node in \"insufficient-storage-624149\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"768af1af-95a6-4dc5-bcf6-b7377c15df5d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"ca65e69b-78a0-4315-844a-a36efc137546","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"57bd8a0b-632d-4d1c-9b28-df36c806c2f2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-624149 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-624149 --output=json --layout=cluster: exit status 7 (273.630591ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-624149","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-624149","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0929 12:05:51.097377  928611 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-624149" does not appear in /home/jenkins/minikube-integration/21655-743952/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-624149 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-624149 --output=json --layout=cluster: exit status 7 (270.200871ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-624149","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-624149","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0929 12:05:51.367973  928717 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-624149" does not appear in /home/jenkins/minikube-integration/21655-743952/kubeconfig
	E0929 12:05:51.378929  928717 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/insufficient-storage-624149/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-624149" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-624149
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-624149: (1.856197989s)
--- PASS: TestInsufficientStorage (10.10s)
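
Note: with --output=json every progress line is a CloudEvents-style envelope; the `type` suffix distinguishes step, info, and error events, and error events carry the exit code (26 above) plus advice in `data`. A sketch of scanning that stream for errors (field names taken directly from the events above):

    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os"
        "strings"
    )

    type event struct {
        Type string `json:"type"`
        Data struct {
            Name     string `json:"name"`
            Message  string `json:"message"`
            Exitcode string `json:"exitcode"`
        } `json:"data"`
    }

    func main() {
        // Pipe `minikube start --output=json ...` into stdin.
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 1024*1024), 1024*1024) // events can be long
        for sc.Scan() {
            var ev event
            if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
                continue // ignore non-JSON lines
            }
            if strings.HasSuffix(ev.Type, ".error") {
                fmt.Printf("error %s (exit %s): %s\n", ev.Data.Name, ev.Data.Exitcode, ev.Data.Message)
            }
        }
    }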

                                                
                                    
TestRunningBinaryUpgrade (51.19s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.2967007906 start -p running-upgrade-620142 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.2967007906 start -p running-upgrade-620142 --memory=3072 --vm-driver=docker  --container-runtime=crio: (24.414212436s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-620142 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-620142 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (21.35077575s)
helpers_test.go:175: Cleaning up "running-upgrade-620142" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-620142
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-620142: (2.379628569s)
--- PASS: TestRunningBinaryUpgrade (51.19s)

                                                
                                    
x
+
TestKubernetesUpgrade (295.72s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-115625 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-115625 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (21.354368863s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-115625
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-115625: (1.852642912s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-115625 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-115625 status --format={{.Host}}: exit status 7 (68.311689ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-115625 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-115625 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m24.051008211s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-115625 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-115625 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-115625 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (68.228735ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-115625] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21655
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21655-743952/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21655-743952/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-115625
	    minikube start -p kubernetes-upgrade-115625 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1156252 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.0, by running:
	    
	    minikube start -p kubernetes-upgrade-115625 --kubernetes-version=v1.34.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-115625 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-115625 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (5.505218597s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-115625" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-115625
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-115625: (2.749734856s)
--- PASS: TestKubernetesUpgrade (295.72s)
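
Note: the exit-106 refusal above boils down to a version comparison between the requested and existing cluster versions. A toy sketch of that guard, assuming plain v<major>.<minor>.<patch> strings as seen here (minikube's real check is more involved):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // parse splits "v1.34.0" into numeric components.
    func parse(v string) []int {
        parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
        nums := make([]int, len(parts))
        for i, p := range parts {
            nums[i], _ = strconv.Atoi(p)
        }
        return nums
    }

    // isDowngrade reports whether requested is older than existing.
    func isDowngrade(existing, requested string) bool {
        e, r := parse(existing), parse(requested)
        for i := 0; i < len(e) && i < len(r); i++ {
            if r[i] != e[i] {
                return r[i] < e[i]
            }
        }
        return false
    }

    func main() {
        if isDowngrade("v1.34.0", "v1.28.0") {
            fmt.Println("refusing: K8S_DOWNGRADE_UNSUPPORTED")
        }
    }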

                                                
                                    
TestMissingContainerUpgrade (97.14s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.1826766005 start -p missing-upgrade-948056 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.1826766005 start -p missing-upgrade-948056 --memory=3072 --driver=docker  --container-runtime=crio: (50.95816239s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-948056
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-948056: (1.691098306s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-948056
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-948056 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-948056 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (39.375191337s)
helpers_test.go:175: Cleaning up "missing-upgrade-948056" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-948056
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-948056: (2.564467168s)
--- PASS: TestMissingContainerUpgrade (97.14s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-657784 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-657784 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (83.160386ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-657784] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21655
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21655-743952/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21655-743952/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
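
Note: the exit-14 usage error is a mutual-exclusion check between --no-kubernetes and --kubernetes-version. A minimal sketch of that validation using the standard flag package (flag names mirror the log; this is not minikube's actual cobra wiring):

    package main

    import (
        "flag"
        "fmt"
        "os"
    )

    func main() {
        noK8s := flag.Bool("no-kubernetes", false, "start without Kubernetes")
        kver := flag.String("kubernetes-version", "", "Kubernetes version to run")
        flag.Parse()
        if *noK8s && *kver != "" {
            fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
            os.Exit(14) // MK_USAGE
        }
        fmt.Println("ok")
    }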

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (38.63s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-657784 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-657784 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (38.274802468s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-657784 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (38.63s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (25.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-657784 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0929 12:06:46.275044  747468 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/functional-550377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-657784 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (22.901179109s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-657784 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-657784 status -o json: exit status 2 (279.368983ms)

-- stdout --
	{"Name":"NoKubernetes-657784","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-657784
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-657784: (1.913053927s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (25.09s)
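
Note: "status -o json" is what lets this test distinguish a running host from a stopped Kubernetes: the command exits non-zero (2 here) when a component is not Running, so scripts should read the JSON fields rather than the exit code alone. A sketch, assuming jq is installed:

	$ minikube -p NoKubernetes-657784 status -o json | jq -r '.Host, .Kubelet'
	Running
	Stopped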

TestNoKubernetes/serial/Start (7.45s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-657784 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-657784 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (7.45108454s)
--- PASS: TestNoKubernetes/serial/Start (7.45s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-657784 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-657784 "sudo systemctl is-active --quiet service kubelet": exit status 1 (273.906712ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)
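
Note: the ssh exit status 3 above follows the systemd convention that "systemctl is-active" exits 0 only for an active unit, with 3 the usual code for an inactive one. A sketch without --quiet, so the state string is visible (profile name from this run):

	$ minikube ssh -p NoKubernetes-657784 "sudo systemctl is-active kubelet"
	inactive
	# exit status 3: kubelet is not running, which is exactly what the test asserts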

TestNoKubernetes/serial/ProfileList (1.53s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.53s)

TestNoKubernetes/serial/Stop (1.21s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-657784
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-657784: (1.211075754s)
--- PASS: TestNoKubernetes/serial/Stop (1.21s)

TestNoKubernetes/serial/StartNoArgs (6.79s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-657784 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-657784 --driver=docker  --container-runtime=crio: (6.793909011s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.79s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-657784 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-657784 "sudo systemctl is-active --quiet service kubelet": exit status 1 (263.966546ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)

TestStoppedBinaryUpgrade/Setup (2.62s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.62s)

TestStoppedBinaryUpgrade/Upgrade (40.05s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.1718236812 start -p stopped-upgrade-629822 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.1718236812 start -p stopped-upgrade-629822 --memory=3072 --vm-driver=docker  --container-runtime=crio: (23.777628142s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.1718236812 -p stopped-upgrade-629822 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.1718236812 -p stopped-upgrade-629822 stop: (2.35297411s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-629822 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-629822 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (13.920251925s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (40.05s)
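
Note: the upgrade scenario is three steps with two binaries: start a cluster with an old release (v1.32.0, downloaded by the test to a temp path), stop it, then start the same profile with the binary under test. In outline:

	$ /tmp/minikube-v1.32.0.1718236812 start -p stopped-upgrade-629822 --memory=3072 --vm-driver=docker --container-runtime=crio
	$ /tmp/minikube-v1.32.0.1718236812 -p stopped-upgrade-629822 stop
	$ out/minikube-linux-amd64 start -p stopped-upgrade-629822 --memory=3072 --driver=docker --container-runtime=crio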

TestPause/serial/Start (44.77s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-435881 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-435881 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (44.765614321s)
--- PASS: TestPause/serial/Start (44.77s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.03s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-629822
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-629822: (1.033754165s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.03s)

TestNetworkPlugins/group/false (3.37s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-049623 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-049623 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (143.455677ms)

-- stdout --
	* [false-049623] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21655
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21655-743952/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21655-743952/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0929 12:08:05.543251  968326 out.go:360] Setting OutFile to fd 1 ...
	I0929 12:08:05.543550  968326 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:08:05.543562  968326 out.go:374] Setting ErrFile to fd 2...
	I0929 12:08:05.543566  968326 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:08:05.543803  968326 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21655-743952/.minikube/bin
	I0929 12:08:05.544343  968326 out.go:368] Setting JSON to false
	I0929 12:08:05.545600  968326 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":17422,"bootTime":1759130263,"procs":286,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 12:08:05.545705  968326 start.go:140] virtualization: kvm guest
	I0929 12:08:05.547407  968326 out.go:179] * [false-049623] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 12:08:05.548443  968326 out.go:179]   - MINIKUBE_LOCATION=21655
	I0929 12:08:05.548468  968326 notify.go:220] Checking for updates...
	I0929 12:08:05.550500  968326 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 12:08:05.551681  968326 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21655-743952/kubeconfig
	I0929 12:08:05.552579  968326 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21655-743952/.minikube
	I0929 12:08:05.553428  968326 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 12:08:05.554402  968326 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 12:08:05.555692  968326 config.go:182] Loaded profile config "kubernetes-upgrade-115625": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 12:08:05.555786  968326 config.go:182] Loaded profile config "pause-435881": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 12:08:05.555864  968326 config.go:182] Loaded profile config "running-upgrade-620142": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I0929 12:08:05.555945  968326 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 12:08:05.578861  968326 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 12:08:05.578933  968326 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 12:08:05.632601  968326 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-29 12:08:05.623187995 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652174848 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 12:08:05.632721  968326 docker.go:318] overlay module found
	I0929 12:08:05.634212  968326 out.go:179] * Using the docker driver based on user configuration
	I0929 12:08:05.635228  968326 start.go:304] selected driver: docker
	I0929 12:08:05.635243  968326 start.go:924] validating driver "docker" against <nil>
	I0929 12:08:05.635254  968326 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 12:08:05.636618  968326 out.go:203] 
	W0929 12:08:05.637528  968326 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0929 12:08:05.638345  968326 out.go:203] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-049623 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-049623

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-049623

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-049623

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-049623

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-049623

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-049623

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-049623

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-049623

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-049623

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-049623

>>> host: /etc/nsswitch.conf:
* Profile "false-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-049623"

>>> host: /etc/hosts:
* Profile "false-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-049623"

>>> host: /etc/resolv.conf:
* Profile "false-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-049623"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-049623

>>> host: crictl pods:
* Profile "false-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-049623"

>>> host: crictl containers:
* Profile "false-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-049623"

>>> k8s: describe netcat deployment:
error: context "false-049623" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-049623" does not exist

>>> k8s: netcat logs:
error: context "false-049623" does not exist

>>> k8s: describe coredns deployment:
error: context "false-049623" does not exist

>>> k8s: describe coredns pods:
error: context "false-049623" does not exist

>>> k8s: coredns logs:
error: context "false-049623" does not exist

>>> k8s: describe api server pod(s):
error: context "false-049623" does not exist

>>> k8s: api server logs:
error: context "false-049623" does not exist

>>> host: /etc/cni:
* Profile "false-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-049623"

>>> host: ip a s:
* Profile "false-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-049623"

>>> host: ip r s:
* Profile "false-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-049623"

>>> host: iptables-save:
* Profile "false-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-049623"

>>> host: iptables table nat:
* Profile "false-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-049623"

>>> k8s: describe kube-proxy daemon set:
error: context "false-049623" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-049623" does not exist

>>> k8s: kube-proxy logs:
error: context "false-049623" does not exist

>>> host: kubelet daemon status:
* Profile "false-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-049623"

>>> host: kubelet daemon config:
* Profile "false-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-049623"

>>> k8s: kubelet logs:
* Profile "false-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-049623"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-049623"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-049623"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21655-743952/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 29 Sep 2025 12:07:04 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: kubernetes-upgrade-115625
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21655-743952/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 29 Sep 2025 12:08:00 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: pause-435881
contexts:
- context:
    cluster: kubernetes-upgrade-115625
    user: kubernetes-upgrade-115625
  name: kubernetes-upgrade-115625
- context:
    cluster: pause-435881
    extensions:
    - extension:
        last-update: Mon, 29 Sep 2025 12:08:00 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-435881
  name: pause-435881
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-115625
  user:
    client-certificate: /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/kubernetes-upgrade-115625/client.crt
    client-key: /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/kubernetes-upgrade-115625/client.key
- name: pause-435881
  user:
    client-certificate: /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/pause-435881/client.crt
    client-key: /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/pause-435881/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-049623

>>> host: docker daemon status:
* Profile "false-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-049623"

>>> host: docker daemon config:
* Profile "false-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-049623"

>>> host: /etc/docker/daemon.json:
* Profile "false-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-049623"

>>> host: docker system info:
* Profile "false-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-049623"

>>> host: cri-docker daemon status:
* Profile "false-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-049623"

>>> host: cri-docker daemon config:
* Profile "false-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-049623"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-049623"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-049623"

>>> host: cri-dockerd version:
* Profile "false-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-049623"

>>> host: containerd daemon status:
* Profile "false-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-049623"

>>> host: containerd daemon config:
* Profile "false-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-049623"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-049623"

>>> host: /etc/containerd/config.toml:
* Profile "false-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-049623"

>>> host: containerd config dump:
* Profile "false-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-049623"

>>> host: crio daemon status:
* Profile "false-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-049623"

>>> host: crio daemon config:
* Profile "false-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-049623"

>>> host: /etc/crio:
* Profile "false-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-049623"

>>> host: crio config:
* Profile "false-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-049623"

----------------------- debugLogs end: false-049623 [took: 3.065762317s] --------------------------------
helpers_test.go:175: Cleaning up "false-049623" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-049623
--- PASS: TestNetworkPlugins/group/false (3.37s)
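
Note: every probe in the debugLogs dump above reports a missing context/profile because the start command was rejected before any cluster was created; that rejection is the point of the test, since CRI-O ships no built-in networking and minikube therefore refuses --cni=false. A sketch of a start line that would pass validation (the concrete CNI is an arbitrary choice; bridge, kindnet, calico, cilium, or flannel all satisfy the check):

	$ minikube start -p false-049623 --driver=docker --container-runtime=crio --cni=bridge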

TestPause/serial/SecondStartNoReconfiguration (8.6s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-435881 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-435881 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (8.579305075s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (8.60s)

TestPause/serial/Pause (0.65s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-435881 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.65s)

TestPause/serial/VerifyStatus (0.31s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-435881 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-435881 --output=json --layout=cluster: exit status 2 (308.91828ms)

-- stdout --
	{"Name":"pause-435881","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-435881","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.31s)
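
Note: in the cluster layout, a paused profile is reported with StatusCode 418 ("Paused") and the command exits 2, so automated checks should parse the JSON instead of the exit code. A sketch, assuming jq:

	$ minikube status -p pause-435881 --output=json --layout=cluster | jq -r '.StatusName'
	Paused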

TestPause/serial/Unpause (0.63s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-435881 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.63s)

TestPause/serial/PauseAgain (0.67s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-435881 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.67s)

TestPause/serial/DeletePaused (2.81s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-435881 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-435881 --alsologtostderr -v=5: (2.807273057s)
--- PASS: TestPause/serial/DeletePaused (2.81s)

TestPause/serial/VerifyDeletedResources (4.43s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (4.375069003s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-435881
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-435881: exit status 1 (16.695213ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-435881: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (4.43s)
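
Note: the volume check leans on docker's behavior for absent volumes: "docker volume inspect" prints an empty JSON array to stdout, an error to stderr, and exits 1. A sketch of using that in a cleanup assertion:

	$ docker volume inspect pause-435881 >/dev/null 2>&1 || echo "volume gone, as expected after delete"
	volume gone, as expected after delete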

TestStartStop/group/old-k8s-version/serial/FirstStart (50.83s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-897200 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-897200 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (50.832027426s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (50.83s)

TestStartStop/group/no-preload/serial/FirstStart (57.48s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-726845 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-726845 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (57.478499262s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (57.48s)

TestStartStop/group/old-k8s-version/serial/DeployApp (11.33s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-897200 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [bbbd909a-ef5c-44b1-944e-ee25088930ae] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [bbbd909a-ef5c-44b1-944e-ee25088930ae] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.003738662s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-897200 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.33s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.87s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-897200 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-897200 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.87s)

TestStartStop/group/old-k8s-version/serial/Stop (16.17s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-897200 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-897200 --alsologtostderr -v=3: (16.174327669s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (16.17s)

TestStartStop/group/no-preload/serial/DeployApp (9.26s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-726845 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [65caf14a-c5c5-4aa5-8267-ffb154b73752] Pending
helpers_test.go:352: "busybox" [65caf14a-c5c5-4aa5-8267-ffb154b73752] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [65caf14a-c5c5-4aa5-8267-ffb154b73752] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003645456s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-726845 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.26s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-897200 -n old-k8s-version-897200
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-897200 -n old-k8s-version-897200: exit status 7 (66.940043ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-897200 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)
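
Note: the Go-template form of status reads out a single field; a stopped profile prints "Stopped" and exits 7, which the harness tolerates ("may be ok") because a stopped host is the expected state at this point. A sketch:

	$ minikube status --format='{{.Host}}' -p old-k8s-version-897200
	Stopped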

TestStartStop/group/old-k8s-version/serial/SecondStart (45.97s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-897200 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-897200 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (45.646026805s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-897200 -n old-k8s-version-897200
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (45.97s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.94s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-726845 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-726845 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.94s)

TestStartStop/group/no-preload/serial/Stop (18.44s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-726845 --alsologtostderr -v=3
E0929 12:10:24.922479  747468 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/addons-164332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-726845 --alsologtostderr -v=3: (18.441642358s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (18.44s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-726845 -n no-preload-726845
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-726845 -n no-preload-726845: exit status 7 (84.338291ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-726845 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/no-preload/serial/SecondStart (44.69s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-726845 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-726845 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (44.298668063s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-726845 -n no-preload-726845
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (44.69s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-4wbzn" [3bc2060a-b433-4c1c-81a6-002b326925ad] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003501788s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-4wbzn" [3bc2060a-b433-4c1c-81a6-002b326925ad] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004007824s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-897200 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-897200 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/old-k8s-version/serial/Pause (2.69s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-897200 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-897200 -n old-k8s-version-897200
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-897200 -n old-k8s-version-897200: exit status 2 (304.022916ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-897200 -n old-k8s-version-897200
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-897200 -n old-k8s-version-897200: exit status 2 (300.209803ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-897200 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-897200 -n old-k8s-version-897200
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-897200 -n old-k8s-version-897200
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.69s)
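The Pause test drives a pause/unpause cycle and reads component state through Go-template status output; while components are paused, minikube status intentionally exits non-zero (the exit status 2 seen above), so the harness tolerates that code. A manual equivalent of the sequence in this block:

    p=old-k8s-version-897200
    out/minikube-linux-amd64 pause -p "$p"
    out/minikube-linux-amd64 status --format='{{.APIServer}}' -p "$p"   # prints "Paused", exits 2
    out/minikube-linux-amd64 status --format='{{.Kubelet}}' -p "$p"     # prints "Stopped", exits 2
    out/minikube-linux-amd64 unpause -p "$p"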

TestStartStop/group/embed-certs/serial/FirstStart (72.45s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-704766 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-704766 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (1m12.454357027s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (72.45s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-l5w8s" [e7030641-df01-47a4-ada2-c161ac4363dd] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003473385s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-l5w8s" [e7030641-df01-47a4-ada2-c161ac4363dd] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.002828407s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-726845 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-726845 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/no-preload/serial/Pause (2.88s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-726845 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-726845 -n no-preload-726845
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-726845 -n no-preload-726845: exit status 2 (343.843737ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-726845 -n no-preload-726845
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-726845 -n no-preload-726845: exit status 2 (338.947031ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-726845 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-726845 -n no-preload-726845
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-726845 -n no-preload-726845
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.88s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (70.79s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-084179 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-084179 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (1m10.790454592s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (70.79s)

TestStartStop/group/newest-cni/serial/FirstStart (29.14s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-876577 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-876577 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (29.143815109s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (29.14s)
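The --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 flag above is handed through to kubeadm, so the node's pod CIDR should be carved out of 10.42.0.0/16. One way to spot-check that after the start (a sketch using the standard node spec field):

    kubectl --context newest-cni-876577 get nodes -o jsonpath='{.items[0].spec.podCIDR}'
    # expect a subnet inside 10.42.0.0/16, e.g. 10.42.0.0/24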

TestNetworkPlugins/group/auto/Start (41.08s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-049623 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E0929 12:11:46.275271  747468 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/functional-550377/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-049623 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (41.084203542s)
--- PASS: TestNetworkPlugins/group/auto/Start (41.08s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.88s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-876577 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.88s)

TestStartStop/group/newest-cni/serial/Stop (12.47s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-876577 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-876577 --alsologtostderr -v=3: (12.468254045s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (12.47s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-876577 -n newest-cni-876577
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-876577 -n newest-cni-876577: exit status 7 (70.179375ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-876577 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/newest-cni/serial/SecondStart (11.37s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-876577 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-876577 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (11.054874499s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-876577 -n newest-cni-876577
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (11.37s)

TestStartStop/group/embed-certs/serial/DeployApp (9.3s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-704766 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [8f821014-0326-47df-aad1-5ba1fdb0de90] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [8f821014-0326-47df-aad1-5ba1fdb0de90] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.005234508s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-704766 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.30s)
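DeployApp applies testdata/busybox.yaml and waits on the integration-test=busybox label before running ulimit -n inside the pod. The repo's exact manifest is not reproduced in this log; a hypothetical stand-in of the same shape (label from the wait line above, image from the VerifyKubernetesImages output) would be:

    kubectl --context embed-certs-704766 apply -f - <<'EOF'
    # hypothetical stand-in for testdata/busybox.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: busybox
      labels:
        integration-test: busybox
    spec:
      containers:
      - name: busybox
        image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
        command: ["sleep", "3600"]
    EOF
    kubectl --context embed-certs-704766 exec busybox -- /bin/sh -c "ulimit -n"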

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.99s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-704766 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-704766 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.99s)

TestNetworkPlugins/group/auto/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-049623 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.35s)
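KubeletFlags only greps the kubelet command line over SSH (pgrep -a kubelet). To pull out one specific flag instead of the whole line, something like the following works (the flag name here is an illustrative assumption, not one the test checks):

    out/minikube-linux-amd64 ssh -p auto-049623 -- pgrep -a kubelet \
      | tr ' ' '\n' | grep -- '--container-runtime-endpoint'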

TestStartStop/group/embed-certs/serial/Stop (18.18s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-704766 --alsologtostderr -v=3
I0929 12:12:25.603058  747468 config.go:182] Loaded profile config "auto-049623": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-704766 --alsologtostderr -v=3: (18.184553419s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (18.18s)

TestNetworkPlugins/group/auto/NetCatPod (8.24s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-049623 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-j9cn2" [df123b74-7d44-477b-90d8-ea717346f6ab] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-j9cn2" [df123b74-7d44-477b-90d8-ea717346f6ab] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 8.004330375s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (8.24s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-876577 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/newest-cni/serial/Pause (2.83s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-876577 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-876577 -n newest-cni-876577
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-876577 -n newest-cni-876577: exit status 2 (325.563695ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-876577 -n newest-cni-876577
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-876577 -n newest-cni-876577: exit status 2 (322.905268ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-876577 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-876577 -n newest-cni-876577
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-876577 -n newest-cni-876577
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.83s)

TestNetworkPlugins/group/kindnet/Start (71.54s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-049623 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-049623 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m11.541671849s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (71.54s)

TestNetworkPlugins/group/auto/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-049623 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)
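The DNS probe resolves the kubernetes.default Service name from inside the netcat pod, exercising the full in-cluster DNS path. If it ever fails, a useful first diagnostic (a sketch) is to check the pod's resolver configuration and the cluster DNS Service it should point at:

    kubectl --context auto-049623 exec deployment/netcat -- cat /etc/resolv.conf
    kubectl --context auto-049623 -n kube-system get svc kube-dns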

TestNetworkPlugins/group/auto/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-049623 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

TestNetworkPlugins/group/auto/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-049623 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)
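The HairPin probe has the netcat pod dial its own Service name ("netcat" on port 8080), so the connection leaves the pod via the Service VIP and must be NATed back into the same pod. To inspect the Service and its endpoints by hand (a sketch; the Service is assumed to ship alongside the Deployment in testdata/netcat-deployment.yaml):

    kubectl --context auto-049623 get svc netcat
    kubectl --context auto-049623 get endpoints netcat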

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.45s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-084179 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [7c61e6f3-840a-4f31-a4a5-72e2b17eb6e0] Pending
helpers_test.go:352: "busybox" [7c61e6f3-840a-4f31-a4a5-72e2b17eb6e0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [7c61e6f3-840a-4f31-a4a5-72e2b17eb6e0] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.004001337s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-084179 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.45s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-704766 -n embed-certs-704766
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-704766 -n embed-certs-704766: exit status 7 (76.40222ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-704766 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/embed-certs/serial/SecondStart (44.56s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-704766 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-704766 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (44.221660707s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-704766 -n embed-certs-704766
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (44.56s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-084179 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-084179 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.001655598s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-084179 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.09s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (16.58s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-084179 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-084179 --alsologtostderr -v=3: (16.578809303s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (16.58s)

TestNetworkPlugins/group/calico/Start (54.97s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-049623 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-049623 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (54.970674883s)
--- PASS: TestNetworkPlugins/group/calico/Start (54.97s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-084179 -n default-k8s-diff-port-084179
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-084179 -n default-k8s-diff-port-084179: exit status 7 (72.671791ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-084179 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (46.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-084179 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-084179 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (45.905317824s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-084179 -n default-k8s-diff-port-084179
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (46.24s)
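This profile was started with --apiserver-port=8444, so the API server should be advertised on 8444 rather than the default 8443. A quick spot-check (a sketch; docker port applies here because the driver is docker and the node container is named after the profile):

    kubectl --context default-k8s-diff-port-084179 cluster-info | head -n1
    docker port default-k8s-diff-port-084179 8444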

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-28fch" [4a1c4173-5179-49b7-9ab7-1a503f608e28] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004403856s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-28fch" [4a1c4173-5179-49b7-9ab7-1a503f608e28] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003483087s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-704766 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-704766 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/embed-certs/serial/Pause (2.92s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-704766 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-704766 -n embed-certs-704766
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-704766 -n embed-certs-704766: exit status 2 (302.054581ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-704766 -n embed-certs-704766
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-704766 -n embed-certs-704766: exit status 2 (348.024568ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-704766 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-704766 -n embed-certs-704766
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-704766 -n embed-certs-704766
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.92s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-5bdqg" [f527a2ef-ad48-452f-a02b-c982e37ff911] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004706346s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
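ControllerPod waits for the CNI's own agent; for kindnet that is the app=kindnet DaemonSet pod in kube-system. The equivalent manual check (a sketch):

    kubectl --context kindnet-049623 -n kube-system get pods -l app=kindnet -o wide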

TestNetworkPlugins/group/custom-flannel/Start (56.42s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-049623 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-049623 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (56.419527078s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (56.42s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-76jkq" [1a346fbb-dc92-4902-a09f-860fab5a988d] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-76jkq" [1a346fbb-dc92-4902-a09f-860fab5a988d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004867617s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-049623 "pgrep -a kubelet"
I0929 12:13:49.805208  747468 config.go:182] Loaded profile config "kindnet-049623": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.32s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.22s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-049623 replace --force -f testdata/netcat-deployment.yaml
I0929 12:13:50.326517  747468 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I0929 12:13:50.778525  747468 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-lt2vl" [aae3ca6f-c0bb-45d1-a54d-391e02a57faa] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-lt2vl" [aae3ca6f-c0bb-45d1-a54d-391e02a57faa] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.004143183s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.22s)
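The kapi.go lines above show the harness polling the netcat Deployment until its observed generation and replica counts catch up. kubectl's built-in rollout wait is the hand-run equivalent (a sketch):

    kubectl --context kindnet-049623 rollout status deployment/netcat --timeout=120s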

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-wr2t5" [692bea25-c17d-4782-a55b-a00038b867ae] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004962578s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-049623 "pgrep -a kubelet"
I0929 12:13:55.261193  747468 config.go:182] Loaded profile config "calico-049623": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.28s)

TestNetworkPlugins/group/calico/NetCatPod (8.22s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-049623 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-ntmdn" [f32c29f3-8dfb-4e72-a879-cf94b0b113c0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-ntmdn" [f32c29f3-8dfb-4e72-a879-cf94b0b113c0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 8.005218706s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (8.22s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-wr2t5" [692bea25-c17d-4782-a55b-a00038b867ae] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003547346s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-084179 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

TestNetworkPlugins/group/kindnet/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-049623 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.14s)

TestNetworkPlugins/group/kindnet/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-049623 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.12s)

TestNetworkPlugins/group/kindnet/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-049623 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-084179 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.98s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-084179 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-084179 -n default-k8s-diff-port-084179
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-084179 -n default-k8s-diff-port-084179: exit status 2 (372.967742ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-084179 -n default-k8s-diff-port-084179
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-084179 -n default-k8s-diff-port-084179: exit status 2 (363.793072ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-084179 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-084179 -n default-k8s-diff-port-084179
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-084179 -n default-k8s-diff-port-084179
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.98s)
E0929 12:15:12.262656  747468 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/old-k8s-version-897200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

TestNetworkPlugins/group/calico/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-049623 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.19s)

TestNetworkPlugins/group/calico/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-049623 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

TestNetworkPlugins/group/calico/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-049623 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

TestNetworkPlugins/group/enable-default-cni/Start (69.88s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-049623 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-049623 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m9.883625104s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (69.88s)

TestNetworkPlugins/group/flannel/Start (59.79s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-049623 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-049623 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (59.794047454s)
--- PASS: TestNetworkPlugins/group/flannel/Start (59.79s)

TestNetworkPlugins/group/bridge/Start (62.61s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-049623 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E0929 12:14:31.284832  747468 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/old-k8s-version-897200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:14:31.291237  747468 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/old-k8s-version-897200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:14:31.302698  747468 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/old-k8s-version-897200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:14:31.324463  747468 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/old-k8s-version-897200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:14:31.366011  747468 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/old-k8s-version-897200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:14:31.447539  747468 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/old-k8s-version-897200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:14:31.609753  747468 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/old-k8s-version-897200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:14:31.931161  747468 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/old-k8s-version-897200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:14:32.572816  747468 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/old-k8s-version-897200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:14:33.855110  747468 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/old-k8s-version-897200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:14:36.417288  747468 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/old-k8s-version-897200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:14:41.539009  747468 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/old-k8s-version-897200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-049623 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m2.610812547s)
--- PASS: TestNetworkPlugins/group/bridge/Start (62.61s)
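(The interleaved cert_rotation errors above appear to come from a client-cert watcher on the already-deleted old-k8s-version-897200 profile and are unrelated to this test.) The Start step is plain CLI driving: the suite invokes the minikube binary with the bridge CNI flags and waits for the cluster to come up. A minimal standalone sketch of the same invocation, assuming the binary path and profile name shown in the log:

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Same flags the suite used for the bridge CNI start; adjust the
	// binary path and profile name for a local checkout.
	cmd := exec.Command("out/minikube-linux-amd64", "start",
		"-p", "bridge-049623",
		"--memory=3072", "--alsologtostderr",
		"--wait=true", "--wait-timeout=15m",
		"--cni=bridge", "--driver=docker", "--container-runtime=crio")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("minikube start failed: %v\n%s", err, out)
	}
	log.Println("bridge cluster is up")
}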

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-049623 "pgrep -a kubelet"
I0929 12:14:42.786085  747468 config.go:182] Loaded profile config "custom-flannel-049623": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)
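KubeletFlags passes when `pgrep -a kubelet` over `minikube ssh` finds a running kubelet and returns its full command line, which is where the configured flags show up. A sketch of the same probe, with the profile name taken from the log:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// pgrep -a prints "<pid> <full command line>"; a non-zero exit
	// means no kubelet process was found inside the node.
	out, err := exec.Command("out/minikube-linux-amd64", "ssh",
		"-p", "custom-flannel-049623", "pgrep -a kubelet").Output()
	if err != nil {
		log.Fatalf("kubelet is not running: %v", err)
	}
	fmt.Println("kubelet flags:", strings.TrimSpace(string(out)))
}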

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.22s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-049623 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-x7jzg" [6c4565ea-6fe0-408e-bf37-b2cd760980c5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-x7jzg" [6c4565ea-6fe0-408e-bf37-b2cd760980c5] Running
E0929 12:14:51.780910  747468 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/old-k8s-version-897200/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.004479413s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.22s)
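NetCatPod re-applies the netcat deployment and then polls until a pod labeled app=netcat is up. Outside the suite, the same gate can be approximated with `kubectl wait`; the context name below is the one from the log:

package main

import (
	"log"
	"os/exec"
)

func kubectl(args ...string) {
	if out, err := exec.Command("kubectl", args...).CombinedOutput(); err != nil {
		log.Fatalf("kubectl %v: %v\n%s", args, err, out)
	}
}

func main() {
	// Recreate the deployment, then block until its pods report Ready.
	kubectl("--context", "custom-flannel-049623",
		"replace", "--force", "-f", "testdata/netcat-deployment.yaml")
	kubectl("--context", "custom-flannel-049623",
		"wait", "--for=condition=ready", "pod", "-l", "app=netcat",
		"--timeout=15m")
}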

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.15s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-049623 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.15s)
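The DNS subtest is an end-to-end resolver check: if `nslookup kubernetes.default` succeeds from inside the netcat pod, the CNI is carrying pod-to-CoreDNS traffic and the search path in the pod's resolv.conf is intact. A sketch of the same probe:

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Resolve the API server's service name from inside the pod network.
	out, err := exec.Command("kubectl", "--context", "custom-flannel-049623",
		"exec", "deployment/netcat", "--",
		"nslookup", "kubernetes.default").CombinedOutput()
	if err != nil {
		log.Fatalf("in-cluster DNS lookup failed: %v\n%s", err, out)
	}
	log.Printf("resolved:\n%s", out)
}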

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-049623 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)
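Localhost verifies loopback inside the pod: `nc -z` only tests that a connection to port 8080 can be opened (no payload is sent), and `-w 5` caps the wait at five seconds, so a broken listener fails fast. The same check, shelled out:

package main

import (
	"log"
	"os/exec"
)

func main() {
	// The exit status of nc is the whole result: 0 if the port accepted
	// a connection on 127.0.0.1, non-zero otherwise.
	err := exec.Command("kubectl", "--context", "custom-flannel-049623",
		"exec", "deployment/netcat", "--",
		"/bin/sh", "-c", "nc -w 5 -i 5 -z localhost 8080").Run()
	if err != nil {
		log.Fatalf("localhost:8080 not reachable inside the pod: %v", err)
	}
	log.Println("localhost:8080 reachable")
}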

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-049623 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)
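HairPin is the same nc probe aimed at the service name `netcat` instead of localhost: the pod's traffic leaves through the CNI, hits its own service VIP, and must be NATed back to the very same pod, so hairpin mode in the plugin has to work for the connection to complete. Sketch:

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Connecting to the "netcat" service from its own backing pod
	// exercises hairpin NAT in the CNI plugin.
	err := exec.Command("kubectl", "--context", "custom-flannel-049623",
		"exec", "deployment/netcat", "--",
		"/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080").Run()
	if err != nil {
		log.Fatalf("hairpin connection to service netcat:8080 failed: %v", err)
	}
	log.Println("hairpin path works")
}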

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.27s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-049623 "pgrep -a kubelet"
I0929 12:15:18.988435  747468 config.go:182] Loaded profile config "enable-default-cni-049623": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.2s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-049623 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-c6kh4" [55475c2a-0d70-4fa1-8edc-e52645509137] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0929 12:15:19.809850  747468 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/no-preload-726845/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-c6kh4" [55475c2a-0d70-4fa1-8edc-e52645509137] Running
E0929 12:15:24.923132  747468 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/addons-164332/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.00357499s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-s4w9l" [7753a9a9-bdea-41ba-8a17-27427490d13c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004583684s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
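ControllerPod only gates the later flannel subtests: it waits for the plugin's own DaemonSet pod (label app=flannel in the kube-flannel namespace) to be running before pod traffic is exercised. An approximation with `kubectl wait`, names taken from the log:

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Block until the flannel DaemonSet pod is Ready; if the plugin
	// never comes up, everything downstream would fail anyway.
	out, err := exec.Command("kubectl", "--context", "flannel-049623",
		"-n", "kube-flannel", "wait", "--for=condition=ready",
		"pod", "-l", "app=flannel", "--timeout=10m").CombinedOutput()
	if err != nil {
		log.Fatalf("flannel controller pod not ready: %v\n%s", err, out)
	}
	log.Printf("%s", out)
}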

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-049623 "pgrep -a kubelet"
I0929 12:15:27.702557  747468 config.go:182] Loaded profile config "flannel-049623": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (8.18s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-049623 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-8kl52" [cc6f99a1-eb5d-4216-8b30-5d187ab7af07] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-8kl52" [cc6f99a1-eb5d-4216-8b30-5d187ab7af07] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 8.004041159s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (8.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-049623 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-049623 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-049623 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-049623 "pgrep -a kubelet"
I0929 12:15:31.347444  747468 config.go:182] Loaded profile config "bridge-049623": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (9.18s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-049623 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-4nnnm" [297db089-cf86-464a-925d-978808d31d6a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-4nnnm" [297db089-cf86-464a-925d-978808d31d6a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004297762s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.18s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-049623 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-049623 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-049623 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-049623 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-049623 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-049623 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                    

Test skip (27/332)

TestDownloadOnly/v1.28.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.34.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.34.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.34.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.0/kubectl (0.00s)

                                                
                                    
TestAddons/serial/Volcano (0.28s)
=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-164332 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.28s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes (0s)
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.17s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-106218" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-106218
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                    
TestNetworkPlugins/group/kubenet (3.19s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-049623 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-049623

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-049623

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-049623

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-049623

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-049623

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-049623

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-049623

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-049623

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-049623

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-049623

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-049623"

>>> host: /etc/hosts:
* Profile "kubenet-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-049623"

>>> host: /etc/resolv.conf:
* Profile "kubenet-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-049623"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-049623

>>> host: crictl pods:
* Profile "kubenet-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-049623"

>>> host: crictl containers:
* Profile "kubenet-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-049623"

>>> k8s: describe netcat deployment:
error: context "kubenet-049623" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-049623" does not exist

>>> k8s: netcat logs:
error: context "kubenet-049623" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-049623" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-049623" does not exist

>>> k8s: coredns logs:
error: context "kubenet-049623" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-049623" does not exist

>>> k8s: api server logs:
error: context "kubenet-049623" does not exist

>>> host: /etc/cni:
* Profile "kubenet-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-049623"

>>> host: ip a s:
* Profile "kubenet-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-049623"

>>> host: ip r s:
* Profile "kubenet-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-049623"

>>> host: iptables-save:
* Profile "kubenet-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-049623"

>>> host: iptables table nat:
* Profile "kubenet-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-049623"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-049623" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-049623" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-049623" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-049623"

>>> host: kubelet daemon config:
* Profile "kubenet-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-049623"

>>> k8s: kubelet logs:
* Profile "kubenet-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-049623"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-049623"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-049623"
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21655-743952/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 29 Sep 2025 12:07:04 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: kubernetes-upgrade-115625
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21655-743952/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 29 Sep 2025 12:08:00 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: pause-435881
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21655-743952/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 29 Sep 2025 12:07:47 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: running-upgrade-620142
contexts:
- context:
    cluster: kubernetes-upgrade-115625
    user: kubernetes-upgrade-115625
  name: kubernetes-upgrade-115625
- context:
    cluster: pause-435881
    extensions:
    - extension:
        last-update: Mon, 29 Sep 2025 12:08:00 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-435881
  name: pause-435881
- context:
    cluster: running-upgrade-620142
    user: running-upgrade-620142
  name: running-upgrade-620142
current-context: pause-435881
kind: Config
users:
- name: kubernetes-upgrade-115625
  user:
    client-certificate: /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/kubernetes-upgrade-115625/client.crt
    client-key: /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/kubernetes-upgrade-115625/client.key
- name: pause-435881
  user:
    client-certificate: /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/pause-435881/client.crt
    client-key: /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/pause-435881/client.key
- name: running-upgrade-620142
  user:
    client-certificate: /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/running-upgrade-620142/client.crt
    client-key: /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/running-upgrade-620142/client.key
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-049623

>>> host: docker daemon status:
* Profile "kubenet-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-049623"

>>> host: docker daemon config:
* Profile "kubenet-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-049623"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-049623"

>>> host: docker system info:
* Profile "kubenet-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-049623"

>>> host: cri-docker daemon status:
* Profile "kubenet-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-049623"

>>> host: cri-docker daemon config:
* Profile "kubenet-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-049623"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-049623"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-049623"

>>> host: cri-dockerd version:
* Profile "kubenet-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-049623"

>>> host: containerd daemon status:
* Profile "kubenet-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-049623"

>>> host: containerd daemon config:
* Profile "kubenet-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-049623"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-049623"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-049623"

>>> host: containerd config dump:
* Profile "kubenet-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-049623"

>>> host: crio daemon status:
* Profile "kubenet-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-049623"

>>> host: crio daemon config:
* Profile "kubenet-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-049623"

>>> host: /etc/crio:
* Profile "kubenet-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-049623"

>>> host: crio config:
* Profile "kubenet-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-049623"
----------------------- debugLogs end: kubenet-049623 [took: 3.011776185s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-049623" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-049623
--- SKIP: TestNetworkPlugins/group/kubenet (3.19s)

                                                
                                    
TestNetworkPlugins/group/cilium (5.08s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-049623 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-049623

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-049623

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-049623

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-049623

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-049623

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-049623

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-049623

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-049623

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-049623

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-049623

>>> host: /etc/nsswitch.conf:
* Profile "cilium-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-049623"

>>> host: /etc/hosts:
* Profile "cilium-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-049623"

>>> host: /etc/resolv.conf:
* Profile "cilium-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-049623"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-049623

>>> host: crictl pods:
* Profile "cilium-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-049623"

>>> host: crictl containers:
* Profile "cilium-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-049623"

>>> k8s: describe netcat deployment:
error: context "cilium-049623" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-049623" does not exist

>>> k8s: netcat logs:
error: context "cilium-049623" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-049623" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-049623" does not exist

>>> k8s: coredns logs:
error: context "cilium-049623" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-049623" does not exist

>>> k8s: api server logs:
error: context "cilium-049623" does not exist

>>> host: /etc/cni:
* Profile "cilium-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-049623"

>>> host: ip a s:
* Profile "cilium-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-049623"

>>> host: ip r s:
* Profile "cilium-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-049623"

>>> host: iptables-save:
* Profile "cilium-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-049623"

>>> host: iptables table nat:
* Profile "cilium-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-049623"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-049623

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-049623

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-049623" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-049623" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-049623

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-049623
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-049623" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-049623" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-049623" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-049623" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-049623" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-049623"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-049623"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-049623"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-049623"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-049623"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21655-743952/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 29 Sep 2025 12:07:04 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: kubernetes-upgrade-115625
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21655-743952/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 29 Sep 2025 12:08:00 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: pause-435881
contexts:
- context:
    cluster: kubernetes-upgrade-115625
    user: kubernetes-upgrade-115625
  name: kubernetes-upgrade-115625
- context:
    cluster: pause-435881
    extensions:
    - extension:
        last-update: Mon, 29 Sep 2025 12:08:00 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-435881
  name: pause-435881
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-115625
  user:
    client-certificate: /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/kubernetes-upgrade-115625/client.crt
    client-key: /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/kubernetes-upgrade-115625/client.key
- name: pause-435881
  user:
    client-certificate: /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/pause-435881/client.crt
    client-key: /home/jenkins/minikube-integration/21655-743952/.minikube/profiles/pause-435881/client.key
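
Note: this kubeconfig explains every `error: context "cilium-049623" does not exist` line in the dump above. It defines only the kubernetes-upgrade-115625 and pause-435881 contexts, current-context is empty, and no cilium-049623 entry was ever written. A generic way to verify this from a shell (standard kubectl invocations, not part of the test harness) is:

$ kubectl config get-contexts                    # lists kubernetes-upgrade-115625 and pause-435881 only
$ kubectl --context cilium-049623 get pods
error: context "cilium-049623" does not exist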

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-049623

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-049623"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-049623"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-049623"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-049623"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-049623"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-049623"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-049623"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-049623"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-049623"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-049623"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-049623"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-049623"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-049623"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-049623"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-049623"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-049623"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-049623"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-049623" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-049623"

                                                
                                                
----------------------- debugLogs end: cilium-049623 [took: 4.921899207s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-049623" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-049623
--- SKIP: TestNetworkPlugins/group/cilium (5.08s)
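
Note: a dump with this ">>> header / output" shape can be produced by a simple collection loop. The sketch below is an illustrative shell approximation, not the actual minikube test harness; the command list is a hand-picked assumption based on the headers above.

# Run each diagnostic command for a profile; failures (missing profile or
# context) are recorded in the output rather than aborting the loop.
PROFILE=cilium-049623
while IFS='|' read -r header cmd; do
  printf '>>> %s:\n' "$header"
  eval "$cmd" 2>&1 || true   # keep going; the error text becomes the section body
  echo
done <<EOF
host: crictl pods|minikube -p $PROFILE ssh 'sudo crictl pods'
k8s: kubectl config|kubectl config view
host: crio config|minikube -p $PROFILE ssh 'sudo crio config'
EOF

Because stderr is folded into each section and non-zero exits are tolerated, a profile that was deleted or never created yields exactly the wall of "Profile not found" and "context does not exist" messages seen here rather than a hard failure of the collector.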

                                                
                                    