Test Report: Docker_Linux_crio 21657

666c3351e3298333ddd2e3f0587bd3e8ac91c0cd:2025-09-29:41679

Test fail (7/332)

TestAddons/parallel/Ingress (152.73s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-300979 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-300979 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-300979 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [6f41570f-fd5b-4b80-9ed2-02e7cd1e28b4] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [6f41570f-fd5b-4b80-9ed2-02e7cd1e28b4] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.009218917s
I0929 10:22:38.871151    7117 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-300979 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-300979 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m13.135005786s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-300979 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-300979 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
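Note on the failure above: "ssh: Process exited with status 28" means the curl run inside the node exited with code 28, curl's timeout error (CURLE_OPERATION_TIMEDOUT); the request to the ingress controller on 127.0.0.1:80 never completed within the 2m13s the test waited. A minimal manual reproduction of the probe, assuming the addons-300979 profile is still running (the --max-time flag is an added assumption so the check fails faster than the test's own timeout):

	# Re-run the exact probe the test performs; exit code 28 again means curl timed out.
	out/minikube-linux-amd64 -p addons-300979 ssh "curl -s --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"
	# Sanity-check that the controller is serving and the Ingress resource got an address.
	kubectl --context addons-300979 -n ingress-nginx get pods -o wide
	kubectl --context addons-300979 get ingress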
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-300979
helpers_test.go:243: (dbg) docker inspect addons-300979:

-- stdout --
	[
	    {
	        "Id": "2bb8014912915f478f872f41c4c78d3932a938b4fa85d840a461f12149deebe3",
	        "Created": "2025-09-29T10:20:08.200462941Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 9078,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T10:20:08.234860149Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/2bb8014912915f478f872f41c4c78d3932a938b4fa85d840a461f12149deebe3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2bb8014912915f478f872f41c4c78d3932a938b4fa85d840a461f12149deebe3/hostname",
	        "HostsPath": "/var/lib/docker/containers/2bb8014912915f478f872f41c4c78d3932a938b4fa85d840a461f12149deebe3/hosts",
	        "LogPath": "/var/lib/docker/containers/2bb8014912915f478f872f41c4c78d3932a938b4fa85d840a461f12149deebe3/2bb8014912915f478f872f41c4c78d3932a938b4fa85d840a461f12149deebe3-json.log",
	        "Name": "/addons-300979",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-300979:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-300979",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2bb8014912915f478f872f41c4c78d3932a938b4fa85d840a461f12149deebe3",
	                "LowerDir": "/var/lib/docker/overlay2/e5d3207831ec758acecea3a7f44ae739956b3477f0e870063c6e9650e73a5ee8-init/diff:/var/lib/docker/overlay2/c7fa3299f755c710ae989985ad7ce5a1ce038c1f2be50e7356b276800d2744f7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e5d3207831ec758acecea3a7f44ae739956b3477f0e870063c6e9650e73a5ee8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e5d3207831ec758acecea3a7f44ae739956b3477f0e870063c6e9650e73a5ee8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e5d3207831ec758acecea3a7f44ae739956b3477f0e870063c6e9650e73a5ee8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-300979",
	                "Source": "/var/lib/docker/volumes/addons-300979/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-300979",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-300979",
	                "name.minikube.sigs.k8s.io": "addons-300979",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8b07831dcbaefa75a067e61b4de1e1f1b844cf6865adb6b9b25688701e67f483",
	            "SandboxKey": "/var/run/docker/netns/8b07831dcbae",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-300979": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b6:be:29:bb:2b:a9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9d88cc2e0d49db3261be89e6a1e88a391f33b6e8295814012c7d6fb85e0f34bf",
	                    "EndpointID": "e50a4cd7b998117d30ea87f53820c759b5018b8f235ac09094eb31baf757e904",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-300979",
	                        "2bb801491291"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
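The inspect output above also shows how the kic driver publishes node ports: HostConfig.PortBindings pins each container port to 127.0.0.1 with an empty HostPort (Docker picks one), and NetworkSettings.Ports records what was assigned, e.g. 22/tcp -> 127.0.0.1:32768. A sketch of reading the mapped SSH port back with the same Go template minikube's cli_runner uses later in this log, assuming the container is still up (key path and user are taken from the provisioning log below):

	# Print the host port Docker assigned to the node's SSH port (32768 in the dump above).
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-300979
	# Connect directly, bypassing "minikube ssh".
	ssh -i /home/jenkins/minikube-integration/21657-3615/.minikube/machines/addons-300979/id_rsa -p 32768 docker@127.0.0.1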
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-300979 -n addons-300979
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-300979 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-300979 logs -n 25: (1.207334027s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ --download-only -p binary-mirror-935346 --alsologtostderr --binary-mirror http://127.0.0.1:44827 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-935346 │ jenkins │ v1.37.0 │ 29 Sep 25 10:19 UTC │                     │
	│ delete  │ -p binary-mirror-935346                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-935346 │ jenkins │ v1.37.0 │ 29 Sep 25 10:19 UTC │ 29 Sep 25 10:19 UTC │
	│ addons  │ enable dashboard -p addons-300979                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-300979        │ jenkins │ v1.37.0 │ 29 Sep 25 10:19 UTC │                     │
	│ addons  │ disable dashboard -p addons-300979                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-300979        │ jenkins │ v1.37.0 │ 29 Sep 25 10:19 UTC │                     │
	│ start   │ -p addons-300979 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-300979        │ jenkins │ v1.37.0 │ 29 Sep 25 10:19 UTC │ 29 Sep 25 10:22 UTC │
	│ addons  │ addons-300979 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-300979        │ jenkins │ v1.37.0 │ 29 Sep 25 10:22 UTC │ 29 Sep 25 10:22 UTC │
	│ addons  │ addons-300979 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-300979        │ jenkins │ v1.37.0 │ 29 Sep 25 10:22 UTC │ 29 Sep 25 10:22 UTC │
	│ addons  │ addons-300979 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-300979        │ jenkins │ v1.37.0 │ 29 Sep 25 10:22 UTC │ 29 Sep 25 10:22 UTC │
	│ addons  │ addons-300979 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-300979        │ jenkins │ v1.37.0 │ 29 Sep 25 10:22 UTC │ 29 Sep 25 10:22 UTC │
	│ addons  │ addons-300979 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-300979        │ jenkins │ v1.37.0 │ 29 Sep 25 10:22 UTC │ 29 Sep 25 10:22 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-300979                                                                                                                                                                                                                                                                                                                                                                                           │ addons-300979        │ jenkins │ v1.37.0 │ 29 Sep 25 10:22 UTC │ 29 Sep 25 10:22 UTC │
	│ addons  │ addons-300979 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-300979        │ jenkins │ v1.37.0 │ 29 Sep 25 10:22 UTC │ 29 Sep 25 10:22 UTC │
	│ addons  │ enable headlamp -p addons-300979 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-300979        │ jenkins │ v1.37.0 │ 29 Sep 25 10:22 UTC │ 29 Sep 25 10:22 UTC │
	│ ssh     │ addons-300979 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-300979        │ jenkins │ v1.37.0 │ 29 Sep 25 10:22 UTC │                     │
	│ ip      │ addons-300979 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-300979        │ jenkins │ v1.37.0 │ 29 Sep 25 10:22 UTC │ 29 Sep 25 10:22 UTC │
	│ addons  │ addons-300979 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-300979        │ jenkins │ v1.37.0 │ 29 Sep 25 10:22 UTC │ 29 Sep 25 10:22 UTC │
	│ addons  │ addons-300979 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-300979        │ jenkins │ v1.37.0 │ 29 Sep 25 10:22 UTC │ 29 Sep 25 10:22 UTC │
	│ addons  │ addons-300979 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-300979        │ jenkins │ v1.37.0 │ 29 Sep 25 10:22 UTC │ 29 Sep 25 10:22 UTC │
	│ addons  │ addons-300979 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-300979        │ jenkins │ v1.37.0 │ 29 Sep 25 10:22 UTC │ 29 Sep 25 10:22 UTC │
	│ addons  │ addons-300979 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-300979        │ jenkins │ v1.37.0 │ 29 Sep 25 10:22 UTC │ 29 Sep 25 10:22 UTC │
	│ ssh     │ addons-300979 ssh cat /opt/local-path-provisioner/pvc-3b9bdaf6-b097-4b55-ac46-9361c664e909_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-300979        │ jenkins │ v1.37.0 │ 29 Sep 25 10:23 UTC │ 29 Sep 25 10:23 UTC │
	│ addons  │ addons-300979 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-300979        │ jenkins │ v1.37.0 │ 29 Sep 25 10:23 UTC │ 29 Sep 25 10:23 UTC │
	│ addons  │ addons-300979 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-300979        │ jenkins │ v1.37.0 │ 29 Sep 25 10:23 UTC │ 29 Sep 25 10:23 UTC │
	│ addons  │ addons-300979 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-300979        │ jenkins │ v1.37.0 │ 29 Sep 25 10:23 UTC │ 29 Sep 25 10:23 UTC │
	│ ip      │ addons-300979 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-300979        │ jenkins │ v1.37.0 │ 29 Sep 25 10:24 UTC │ 29 Sep 25 10:24 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 10:19:43
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 10:19:43.766893    8435 out.go:360] Setting OutFile to fd 1 ...
	I0929 10:19:43.766993    8435 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:19:43.767001    8435 out.go:374] Setting ErrFile to fd 2...
	I0929 10:19:43.767005    8435 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:19:43.767172    8435 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21657-3615/.minikube/bin
	I0929 10:19:43.767711    8435 out.go:368] Setting JSON to false
	I0929 10:19:43.768517    8435 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":128,"bootTime":1759141056,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 10:19:43.768595    8435 start.go:140] virtualization: kvm guest
	I0929 10:19:43.770565    8435 out.go:179] * [addons-300979] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 10:19:43.771721    8435 notify.go:220] Checking for updates...
	I0929 10:19:43.771751    8435 out.go:179]   - MINIKUBE_LOCATION=21657
	I0929 10:19:43.772712    8435 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 10:19:43.773790    8435 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21657-3615/kubeconfig
	I0929 10:19:43.774850    8435 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21657-3615/.minikube
	I0929 10:19:43.775906    8435 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 10:19:43.776888    8435 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 10:19:43.778167    8435 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 10:19:43.803016    8435 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 10:19:43.803138    8435 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 10:19:43.853345    8435 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-09-29 10:19:43.843759898 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 10:19:43.853446    8435 docker.go:318] overlay module found
	I0929 10:19:43.855323    8435 out.go:179] * Using the docker driver based on user configuration
	I0929 10:19:43.856350    8435 start.go:304] selected driver: docker
	I0929 10:19:43.856365    8435 start.go:924] validating driver "docker" against <nil>
	I0929 10:19:43.856378    8435 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 10:19:43.856952    8435 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 10:19:43.911285    8435 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-09-29 10:19:43.901519927 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 10:19:43.911458    8435 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0929 10:19:43.911684    8435 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 10:19:43.913360    8435 out.go:179] * Using Docker driver with root privileges
	I0929 10:19:43.914475    8435 cni.go:84] Creating CNI manager for ""
	I0929 10:19:43.914543    8435 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 10:19:43.914556    8435 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0929 10:19:43.914642    8435 start.go:348] cluster config:
	{Name:addons-300979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-300979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}

	I0929 10:19:43.915910    8435 out.go:179] * Starting "addons-300979" primary control-plane node in "addons-300979" cluster
	I0929 10:19:43.916975    8435 cache.go:123] Beginning downloading kic base image for docker with crio
	I0929 10:19:43.918193    8435 out.go:179] * Pulling base image v0.0.48 ...
	I0929 10:19:43.919086    8435 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 10:19:43.919119    8435 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21657-3615/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0929 10:19:43.919121    8435 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 10:19:43.919128    8435 cache.go:58] Caching tarball of preloaded images
	I0929 10:19:43.919193    8435 preload.go:172] Found /home/jenkins/minikube-integration/21657-3615/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0929 10:19:43.919203    8435 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0929 10:19:43.919481    8435 profile.go:143] Saving config to /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/addons-300979/config.json ...
	I0929 10:19:43.919506    8435 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/addons-300979/config.json: {Name:mk4a484bb97721ed62e356fe384e2736545a8bea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:19:43.935231    8435 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I0929 10:19:43.935358    8435 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory
	I0929 10:19:43.935384    8435 image.go:68] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory, skipping pull
	I0929 10:19:43.935391    8435 image.go:137] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in cache, skipping pull
	I0929 10:19:43.935398    8435 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 as a tarball
	I0929 10:19:43.935410    8435 cache.go:165] Loading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 from local cache
	I0929 10:19:56.504027    8435 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 from cached tarball
	I0929 10:19:56.504063    8435 cache.go:232] Successfully downloaded all kic artifacts
	I0929 10:19:56.504096    8435 start.go:360] acquireMachinesLock for addons-300979: {Name:mk82c69ffb9eada82feadfbce9fe84230cedecaa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 10:19:56.504186    8435 start.go:364] duration metric: took 71.016µs to acquireMachinesLock for "addons-300979"
	I0929 10:19:56.504207    8435 start.go:93] Provisioning new machine with config: &{Name:addons-300979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-300979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0929 10:19:56.504266    8435 start.go:125] createHost starting for "" (driver="docker")
	I0929 10:19:56.506037    8435 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0929 10:19:56.506229    8435 start.go:159] libmachine.API.Create for "addons-300979" (driver="docker")
	I0929 10:19:56.506257    8435 client.go:168] LocalClient.Create starting
	I0929 10:19:56.506358    8435 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21657-3615/.minikube/certs/ca.pem
	I0929 10:19:56.675383    8435 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21657-3615/.minikube/certs/cert.pem
	I0929 10:19:56.910275    8435 cli_runner.go:164] Run: docker network inspect addons-300979 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0929 10:19:56.927184    8435 cli_runner.go:211] docker network inspect addons-300979 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0929 10:19:56.927259    8435 network_create.go:284] running [docker network inspect addons-300979] to gather additional debugging logs...
	I0929 10:19:56.927283    8435 cli_runner.go:164] Run: docker network inspect addons-300979
	W0929 10:19:56.943517    8435 cli_runner.go:211] docker network inspect addons-300979 returned with exit code 1
	I0929 10:19:56.943556    8435 network_create.go:287] error running [docker network inspect addons-300979]: docker network inspect addons-300979: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-300979 not found
	I0929 10:19:56.943570    8435 network_create.go:289] output of [docker network inspect addons-300979]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-300979 not found
	
	** /stderr **
	I0929 10:19:56.943657    8435 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 10:19:56.959599    8435 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0013bb730}
	I0929 10:19:56.959637    8435 network_create.go:124] attempt to create docker network addons-300979 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0929 10:19:56.959682    8435 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-300979 addons-300979
	I0929 10:19:57.352640    8435 network_create.go:108] docker network addons-300979 192.168.49.0/24 created
	I0929 10:19:57.352672    8435 kic.go:121] calculated static IP "192.168.49.2" for the "addons-300979" container
	I0929 10:19:57.352727    8435 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0929 10:19:57.369049    8435 cli_runner.go:164] Run: docker volume create addons-300979 --label name.minikube.sigs.k8s.io=addons-300979 --label created_by.minikube.sigs.k8s.io=true
	I0929 10:19:57.540369    8435 oci.go:103] Successfully created a docker volume addons-300979
	I0929 10:19:57.540475    8435 cli_runner.go:164] Run: docker run --rm --name addons-300979-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-300979 --entrypoint /usr/bin/test -v addons-300979:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0929 10:20:03.883759    8435 cli_runner.go:217] Completed: docker run --rm --name addons-300979-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-300979 --entrypoint /usr/bin/test -v addons-300979:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib: (6.343231854s)
	I0929 10:20:03.883792    8435 oci.go:107] Successfully prepared a docker volume addons-300979
	I0929 10:20:03.883819    8435 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 10:20:03.883840    8435 kic.go:194] Starting extracting preloaded images to volume ...
	I0929 10:20:03.883918    8435 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21657-3615/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-300979:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0929 10:20:08.130733    8435 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21657-3615/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-300979:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.246762909s)
	I0929 10:20:08.130767    8435 kic.go:203] duration metric: took 4.246924629s to extract preloaded images to volume ...
	W0929 10:20:08.130857    8435 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0929 10:20:08.130912    8435 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0929 10:20:08.130949    8435 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0929 10:20:08.184014    8435 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-300979 --name addons-300979 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-300979 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-300979 --network addons-300979 --ip 192.168.49.2 --volume addons-300979:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0929 10:20:08.471383    8435 cli_runner.go:164] Run: docker container inspect addons-300979 --format={{.State.Running}}
	I0929 10:20:08.489489    8435 cli_runner.go:164] Run: docker container inspect addons-300979 --format={{.State.Status}}
	I0929 10:20:08.507568    8435 cli_runner.go:164] Run: docker exec addons-300979 stat /var/lib/dpkg/alternatives/iptables
	I0929 10:20:08.554071    8435 oci.go:144] the created container "addons-300979" has a running status.
	I0929 10:20:08.554102    8435 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21657-3615/.minikube/machines/addons-300979/id_rsa...
	I0929 10:20:08.838491    8435 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21657-3615/.minikube/machines/addons-300979/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0929 10:20:08.863746    8435 cli_runner.go:164] Run: docker container inspect addons-300979 --format={{.State.Status}}
	I0929 10:20:08.883129    8435 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0929 10:20:08.883158    8435 kic_runner.go:114] Args: [docker exec --privileged addons-300979 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0929 10:20:08.928572    8435 cli_runner.go:164] Run: docker container inspect addons-300979 --format={{.State.Status}}
	I0929 10:20:08.946250    8435 machine.go:93] provisionDockerMachine start ...
	I0929 10:20:08.946330    8435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-300979
	I0929 10:20:08.964484    8435 main.go:141] libmachine: Using SSH client type: native
	I0929 10:20:08.964766    8435 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0929 10:20:08.964780    8435 main.go:141] libmachine: About to run SSH command:
	hostname
	I0929 10:20:09.098340    8435 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-300979
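Everything that follows is driven over SSH to the forwarded port (127.0.0.1:32768; key path and "docker" user appear in the sshutil lines below). A rough equivalent of the native SSH client using golang.org/x/crypto/ssh; the InsecureIgnoreHostKey callback is an assumption made here purely for brevity:

	package main

	import (
		"fmt"
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile("/home/jenkins/minikube-integration/21657-3615/.minikube/machines/addons-300979/id_rsa")
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test node on loopback; not for production use
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:32768", cfg)
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		defer sess.Close()
		out, err := sess.CombinedOutput("hostname")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s", out) // prints "addons-300979", matching the log line above
	}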
	
	I0929 10:20:09.098367    8435 ubuntu.go:182] provisioning hostname "addons-300979"
	I0929 10:20:09.098419    8435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-300979
	I0929 10:20:09.116067    8435 main.go:141] libmachine: Using SSH client type: native
	I0929 10:20:09.116300    8435 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0929 10:20:09.116315    8435 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-300979 && echo "addons-300979" | sudo tee /etc/hostname
	I0929 10:20:09.261239    8435 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-300979
	
	I0929 10:20:09.261303    8435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-300979
	I0929 10:20:09.278214    8435 main.go:141] libmachine: Using SSH client type: native
	I0929 10:20:09.278409    8435 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0929 10:20:09.278428    8435 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-300979' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-300979/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-300979' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 10:20:09.412319    8435 main.go:141] libmachine: SSH cmd err, output: <nil>: 
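The shell block above is an idempotent /etc/hosts fixup: only when no entry for the hostname exists does it rewrite an existing 127.0.1.1 line or append a new one. The same logic as a Go sketch (path and hostname from this run):

	package main

	import (
		"log"
		"os"
		"regexp"
		"strings"
	)

	// ensureHostsEntry mirrors the shell above: skip if the hostname is already
	// present, else rewrite the 127.0.1.1 line or append one.
	func ensureHostsEntry(path, hostname string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(hostname) + `$`).Match(data) {
			return nil // already present
		}
		line := "127.0.1.1 " + hostname
		re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		var out string
		if re.Match(data) {
			out = re.ReplaceAllString(string(data), line)
		} else {
			out = strings.TrimRight(string(data), "\n") + "\n" + line + "\n"
		}
		return os.WriteFile(path, []byte(out), 0644)
	}

	func main() {
		if err := ensureHostsEntry("/etc/hosts", "addons-300979"); err != nil {
			log.Fatal(err)
		}
	}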
	I0929 10:20:09.412351    8435 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21657-3615/.minikube CaCertPath:/home/jenkins/minikube-integration/21657-3615/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21657-3615/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21657-3615/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21657-3615/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21657-3615/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21657-3615/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21657-3615/.minikube}
	I0929 10:20:09.412387    8435 ubuntu.go:190] setting up certificates
	I0929 10:20:09.412400    8435 provision.go:84] configureAuth start
	I0929 10:20:09.412454    8435 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-300979
	I0929 10:20:09.429826    8435 provision.go:143] copyHostCerts
	I0929 10:20:09.429923    8435 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21657-3615/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21657-3615/.minikube/cert.pem (1123 bytes)
	I0929 10:20:09.430038    8435 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21657-3615/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21657-3615/.minikube/key.pem (1679 bytes)
	I0929 10:20:09.430100    8435 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21657-3615/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21657-3615/.minikube/ca.pem (1082 bytes)
	I0929 10:20:09.430151    8435 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21657-3615/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21657-3615/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21657-3615/.minikube/certs/ca-key.pem org=jenkins.addons-300979 san=[127.0.0.1 192.168.49.2 addons-300979 localhost minikube]
	I0929 10:20:09.549382    8435 provision.go:177] copyRemoteCerts
	I0929 10:20:09.549444    8435 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 10:20:09.549482    8435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-300979
	I0929 10:20:09.566798    8435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21657-3615/.minikube/machines/addons-300979/id_rsa Username:docker}
	I0929 10:20:09.661989    8435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3615/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0929 10:20:09.687070    8435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3615/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0929 10:20:09.710672    8435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3615/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0929 10:20:09.734192    8435 provision.go:87] duration metric: took 321.77999ms to configureAuth
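configureAuth generates a server certificate whose SANs cover every name the machine may be reached by (the san=[...] list in the provision.go:117 line above). A compressed crypto/x509 sketch of such a certificate; note that minikube signs it with its CA, whereas this example self-signs purely to stay short:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		priv, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.addons-300979"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the cluster config
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SAN list copied from the log above.
			DNSNames:    []string{"addons-300979", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &priv.PublicKey, priv)
		if err != nil {
			log.Fatal(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}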
	I0929 10:20:09.734219    8435 ubuntu.go:206] setting minikube options for container-runtime
	I0929 10:20:09.734370    8435 config.go:182] Loaded profile config "addons-300979": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 10:20:09.734458    8435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-300979
	I0929 10:20:09.751832    8435 main.go:141] libmachine: Using SSH client type: native
	I0929 10:20:09.752107    8435 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0929 10:20:09.752130    8435 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0929 10:20:09.985067    8435 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0929 10:20:09.985092    8435 machine.go:96] duration metric: took 1.03882041s to provisionDockerMachine
	I0929 10:20:09.985102    8435 client.go:171] duration metric: took 13.478837657s to LocalClient.Create
	I0929 10:20:09.985121    8435 start.go:167] duration metric: took 13.478897404s to libmachine.API.Create "addons-300979"
	I0929 10:20:09.985128    8435 start.go:293] postStartSetup for "addons-300979" (driver="docker")
	I0929 10:20:09.985137    8435 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 10:20:09.985188    8435 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 10:20:09.985226    8435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-300979
	I0929 10:20:10.003232    8435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21657-3615/.minikube/machines/addons-300979/id_rsa Username:docker}
	I0929 10:20:10.100976    8435 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 10:20:10.104404    8435 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0929 10:20:10.104441    8435 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0929 10:20:10.104454    8435 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0929 10:20:10.104463    8435 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0929 10:20:10.104478    8435 filesync.go:126] Scanning /home/jenkins/minikube-integration/21657-3615/.minikube/addons for local assets ...
	I0929 10:20:10.104552    8435 filesync.go:126] Scanning /home/jenkins/minikube-integration/21657-3615/.minikube/files for local assets ...
	I0929 10:20:10.104586    8435 start.go:296] duration metric: took 119.450984ms for postStartSetup
	I0929 10:20:10.104916    8435 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-300979
	I0929 10:20:10.122454    8435 profile.go:143] Saving config to /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/addons-300979/config.json ...
	I0929 10:20:10.122742    8435 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 10:20:10.122813    8435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-300979
	I0929 10:20:10.139838    8435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21657-3615/.minikube/machines/addons-300979/id_rsa Username:docker}
	I0929 10:20:10.231894    8435 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0929 10:20:10.236214    8435 start.go:128] duration metric: took 13.731935539s to createHost
	I0929 10:20:10.236239    8435 start.go:83] releasing machines lock for "addons-300979", held for 13.732043421s
	I0929 10:20:10.236309    8435 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-300979
	I0929 10:20:10.253933    8435 ssh_runner.go:195] Run: cat /version.json
	I0929 10:20:10.253997    8435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-300979
	I0929 10:20:10.253933    8435 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 10:20:10.254126    8435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-300979
	I0929 10:20:10.272666    8435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21657-3615/.minikube/machines/addons-300979/id_rsa Username:docker}
	I0929 10:20:10.273050    8435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21657-3615/.minikube/machines/addons-300979/id_rsa Username:docker}
	I0929 10:20:10.434993    8435 ssh_runner.go:195] Run: systemctl --version
	I0929 10:20:10.439785    8435 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0929 10:20:10.578315    8435 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0929 10:20:10.582801    8435 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 10:20:10.604457    8435 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0929 10:20:10.604552    8435 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 10:20:10.632083    8435 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
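"Disabling" CNI configs here means renaming them with a .mk_disabled suffix rather than deleting them, so only kindnet's config stays active and the originals remain recoverable. Roughly, in Go:

	package main

	import (
		"log"
		"os"
		"path/filepath"
		"strings"
	)

	func main() {
		// Park loopback and bridge/podman CNI configs under a .mk_disabled
		// suffix, mirroring the find/mv commands above.
		for _, pat := range []string{"*loopback.conf*", "*bridge*", "*podman*"} {
			matches, err := filepath.Glob(filepath.Join("/etc/cni/net.d", pat))
			if err != nil {
				log.Fatal(err)
			}
			for _, p := range matches {
				if strings.HasSuffix(p, ".mk_disabled") {
					continue // already disabled
				}
				if err := os.Rename(p, p+".mk_disabled"); err != nil {
					log.Fatal(err)
				}
			}
		}
	}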
	I0929 10:20:10.632107    8435 start.go:495] detecting cgroup driver to use...
	I0929 10:20:10.632134    8435 detect.go:190] detected "systemd" cgroup driver on host os
	I0929 10:20:10.632174    8435 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 10:20:10.645919    8435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 10:20:10.656542    8435 docker.go:218] disabling cri-docker service (if available) ...
	I0929 10:20:10.656601    8435 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0929 10:20:10.669533    8435 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0929 10:20:10.682718    8435 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0929 10:20:10.747412    8435 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0929 10:20:10.817943    8435 docker.go:234] disabling docker service ...
	I0929 10:20:10.818003    8435 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0929 10:20:10.834370    8435 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0929 10:20:10.845460    8435 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0929 10:20:10.912772    8435 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0929 10:20:11.026173    8435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 10:20:11.037903    8435 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 10:20:11.054546    8435 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0929 10:20:11.054606    8435 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 10:20:11.066716    8435 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0929 10:20:11.066784    8435 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 10:20:11.076535    8435 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 10:20:11.085809    8435 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 10:20:11.095302    8435 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 10:20:11.104184    8435 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 10:20:11.113654    8435 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0929 10:20:11.129692    8435 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
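The sed series above edits CRI-O's drop-in in place: pause image, systemd cgroup manager, conmon cgroup, and an unprivileged-port sysctl. A sketch of the approximate end state of /etc/crio/crio.conf.d/02-crio.conf, reconstructed from those commands rather than captured from the node (the real file carries more keys):

	package main

	import (
		"log"
		"os"
	)

	// Approximate net effect of the sed pipeline above on the CRI-O drop-in;
	// surrounding keys in the real file are omitted.
	const crioDropIn = `[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"
	`

	func main() {
		if err := os.WriteFile("/etc/crio/crio.conf.d/02-crio.conf", []byte(crioDropIn), 0644); err != nil {
			log.Fatal(err)
		}
	}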
	I0929 10:20:11.139620    8435 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 10:20:11.147638    8435 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0929 10:20:11.147683    8435 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0929 10:20:11.159384    8435 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 10:20:11.167658    8435 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 10:20:11.274704    8435 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0929 10:20:11.365599    8435 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0929 10:20:11.365674    8435 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0929 10:20:11.369617    8435 start.go:563] Will wait 60s for crictl version
	I0929 10:20:11.369680    8435 ssh_runner.go:195] Run: which crictl
	I0929 10:20:11.373077    8435 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 10:20:11.407902    8435 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0929 10:20:11.408026    8435 ssh_runner.go:195] Run: crio --version
	I0929 10:20:11.440987    8435 ssh_runner.go:195] Run: crio --version
	I0929 10:20:11.476033    8435 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0929 10:20:11.477195    8435 cli_runner.go:164] Run: docker network inspect addons-300979 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 10:20:11.493235    8435 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0929 10:20:11.496842    8435 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 10:20:11.507713    8435 kubeadm.go:875] updating cluster {Name:addons-300979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-300979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 10:20:11.507808    8435 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 10:20:11.507856    8435 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 10:20:11.571808    8435 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 10:20:11.571830    8435 crio.go:433] Images already preloaded, skipping extraction
	I0929 10:20:11.571896    8435 ssh_runner.go:195] Run: sudo crictl images --output json
	I0929 10:20:11.603843    8435 crio.go:514] all images are preloaded for cri-o runtime.
	I0929 10:20:11.603866    8435 cache_images.go:85] Images are preloaded, skipping loading
	I0929 10:20:11.603887    8435 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 crio true true} ...
	I0929 10:20:11.603967    8435 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-300979 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:addons-300979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0929 10:20:11.604028    8435 ssh_runner.go:195] Run: crio config
	I0929 10:20:11.643362    8435 cni.go:84] Creating CNI manager for ""
	I0929 10:20:11.643389    8435 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 10:20:11.643411    8435 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 10:20:11.643437    8435 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-300979 NodeName:addons-300979 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 10:20:11.643542    8435 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-300979"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0929 10:20:11.643595    8435 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0929 10:20:11.652575    8435 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 10:20:11.652626    8435 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 10:20:11.660959    8435 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0929 10:20:11.677830    8435 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 10:20:11.697096    8435 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I0929 10:20:11.713864    8435 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0929 10:20:11.717212    8435 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 10:20:11.727567    8435 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 10:20:11.785976    8435 ssh_runner.go:195] Run: sudo systemctl start kubelet
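The three `scp memory` writes plus daemon-reload/start above amount to installing the kubelet unit and its kubeadm drop-in shown earlier, then starting the service. As a sketch (drop-in body elided here; it is the kubeadm.go:938 block above):

	package main

	import (
		"log"
		"os"
		"os/exec"
	)

	func main() {
		// The ExecStart drop-in dumped earlier in this log would be written
		// here; its exact contents come from the kubeadm.go:938 block above.
		dropIn := []byte("..." /* 10-kubeadm.conf content from the log */)
		if err := os.MkdirAll("/etc/systemd/system/kubelet.service.d", 0755); err != nil {
			log.Fatal(err)
		}
		if err := os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", dropIn, 0644); err != nil {
			log.Fatal(err)
		}
		// Same sequence as the two ssh_runner calls above.
		for _, args := range [][]string{{"daemon-reload"}, {"start", "kubelet"}} {
			if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
				log.Fatalf("systemctl %v: %v\n%s", args, err, out)
			}
		}
	}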
	I0929 10:20:11.813159    8435 certs.go:68] Setting up /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/addons-300979 for IP: 192.168.49.2
	I0929 10:20:11.813188    8435 certs.go:194] generating shared ca certs ...
	I0929 10:20:11.813207    8435 certs.go:226] acquiring lock for ca certs: {Name:mk978420e70adcc3b732b8c55ab002a337dd20fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:20:11.813340    8435 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21657-3615/.minikube/ca.key
	I0929 10:20:11.918506    8435 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21657-3615/.minikube/ca.crt ...
	I0929 10:20:11.918538    8435 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21657-3615/.minikube/ca.crt: {Name:mk45a73f6316a5043f959a20d92051998efe1493 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:20:11.918702    8435 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21657-3615/.minikube/ca.key ...
	I0929 10:20:11.918713    8435 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21657-3615/.minikube/ca.key: {Name:mk3ea7c728e2a24d96302839c5aa1037f072c0b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:20:11.918782    8435 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21657-3615/.minikube/proxy-client-ca.key
	I0929 10:20:12.010623    8435 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21657-3615/.minikube/proxy-client-ca.crt ...
	I0929 10:20:12.010650    8435 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21657-3615/.minikube/proxy-client-ca.crt: {Name:mkc0ab0d833d646fb3441d51f0d4a7d6dbb73d45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:20:12.010809    8435 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21657-3615/.minikube/proxy-client-ca.key ...
	I0929 10:20:12.010820    8435 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21657-3615/.minikube/proxy-client-ca.key: {Name:mk279746e1ed61ed39a6dadb52c48e481ed51bf6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:20:12.010897    8435 certs.go:256] generating profile certs ...
	I0929 10:20:12.010964    8435 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/addons-300979/client.key
	I0929 10:20:12.010978    8435 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/addons-300979/client.crt with IP's: []
	I0929 10:20:12.055596    8435 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/addons-300979/client.crt ...
	I0929 10:20:12.055619    8435 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/addons-300979/client.crt: {Name:mkde57ad2401a6098ab6f7aeb10f232044b95c70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:20:12.055759    8435 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/addons-300979/client.key ...
	I0929 10:20:12.055769    8435 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/addons-300979/client.key: {Name:mk28f21243cbb8a3dd6de408de7e825d24e91c21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:20:12.055839    8435 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/addons-300979/apiserver.key.44d77551
	I0929 10:20:12.055857    8435 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/addons-300979/apiserver.crt.44d77551 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0929 10:20:12.420068    8435 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/addons-300979/apiserver.crt.44d77551 ...
	I0929 10:20:12.420100    8435 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/addons-300979/apiserver.crt.44d77551: {Name:mk432cd7f5fbabb275d5bbfcbb17e5a695efdfc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:20:12.420268    8435 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/addons-300979/apiserver.key.44d77551 ...
	I0929 10:20:12.420280    8435 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/addons-300979/apiserver.key.44d77551: {Name:mk95a06307d77bff80f40f52a6029f72aefe201d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:20:12.420350    8435 certs.go:381] copying /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/addons-300979/apiserver.crt.44d77551 -> /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/addons-300979/apiserver.crt
	I0929 10:20:12.420424    8435 certs.go:385] copying /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/addons-300979/apiserver.key.44d77551 -> /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/addons-300979/apiserver.key
	I0929 10:20:12.420472    8435 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/addons-300979/proxy-client.key
	I0929 10:20:12.420489    8435 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/addons-300979/proxy-client.crt with IP's: []
	I0929 10:20:12.531918    8435 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/addons-300979/proxy-client.crt ...
	I0929 10:20:12.531950    8435 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/addons-300979/proxy-client.crt: {Name:mk29469acb71c115896f741cdd1e6942e56a3f80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:20:12.532113    8435 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/addons-300979/proxy-client.key ...
	I0929 10:20:12.532123    8435 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/addons-300979/proxy-client.key: {Name:mk567f7290e14f771868ad32adc7d53618d4e2aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:20:12.532287    8435 certs.go:484] found cert: /home/jenkins/minikube-integration/21657-3615/.minikube/certs/ca-key.pem (1679 bytes)
	I0929 10:20:12.532320    8435 certs.go:484] found cert: /home/jenkins/minikube-integration/21657-3615/.minikube/certs/ca.pem (1082 bytes)
	I0929 10:20:12.532343    8435 certs.go:484] found cert: /home/jenkins/minikube-integration/21657-3615/.minikube/certs/cert.pem (1123 bytes)
	I0929 10:20:12.532366    8435 certs.go:484] found cert: /home/jenkins/minikube-integration/21657-3615/.minikube/certs/key.pem (1679 bytes)
	I0929 10:20:12.532859    8435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3615/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 10:20:12.557345    8435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3615/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0929 10:20:12.581824    8435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3615/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 10:20:12.605567    8435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3615/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0929 10:20:12.628441    8435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/addons-300979/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0929 10:20:12.651585    8435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/addons-300979/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0929 10:20:12.674085    8435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/addons-300979/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 10:20:12.696252    8435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/addons-300979/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0929 10:20:12.718712    8435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21657-3615/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 10:20:12.743978    8435 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 10:20:12.760894    8435 ssh_runner.go:195] Run: openssl version
	I0929 10:20:12.766051    8435 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 10:20:12.777848    8435 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 10:20:12.781275    8435 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 10:20 /usr/share/ca-certificates/minikubeCA.pem
	I0929 10:20:12.781328    8435 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 10:20:12.787766    8435 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
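The b5213941.0 link name above is the OpenSSL subject hash of minikubeCA plus the conventional .0 suffix, which is how CApath-style lookups locate a trust anchor. The same two steps as that shell, sketched in Go:

	package main

	import (
		"log"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		// `openssl x509 -hash -noout -in <cert>` prints the subject hash that
		// OpenSSL's CApath lookup expects as the link name (plus ".0").
		out, err := exec.Command("openssl", "x509", "-hash", "-noout",
			"-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
		if err != nil {
			log.Fatal(err)
		}
		link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0" // b5213941.0 in this run
		if _, err := os.Lstat(link); os.IsNotExist(err) {
			if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
				log.Fatal(err)
			}
		}
	}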
	I0929 10:20:12.796735    8435 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 10:20:12.799814    8435 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0929 10:20:12.799861    8435 kubeadm.go:392] StartCluster: {Name:addons-300979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-300979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 10:20:12.799952    8435 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0929 10:20:12.800014    8435 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0929 10:20:12.832392    8435 cri.go:89] found id: ""
	I0929 10:20:12.832453    8435 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 10:20:12.841507    8435 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0929 10:20:12.850168    8435 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0929 10:20:12.850219    8435 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0929 10:20:12.859155    8435 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0929 10:20:12.859174    8435 kubeadm.go:157] found existing configuration files:
	
	I0929 10:20:12.859208    8435 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0929 10:20:12.867708    8435 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0929 10:20:12.867751    8435 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0929 10:20:12.875938    8435 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0929 10:20:12.884095    8435 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0929 10:20:12.884143    8435 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0929 10:20:12.892166    8435 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0929 10:20:12.900405    8435 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0929 10:20:12.900450    8435 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0929 10:20:12.908991    8435 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0929 10:20:12.917168    8435 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0929 10:20:12.917214    8435 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0929 10:20:12.925324    8435 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0929 10:20:12.961992    8435 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0929 10:20:12.962059    8435 kubeadm.go:310] [preflight] Running pre-flight checks
	I0929 10:20:12.976804    8435 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0929 10:20:12.976907    8435 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1040-gcp
	I0929 10:20:12.976969    8435 kubeadm.go:310] OS: Linux
	I0929 10:20:12.977026    8435 kubeadm.go:310] CGROUPS_CPU: enabled
	I0929 10:20:12.977107    8435 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0929 10:20:12.977188    8435 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0929 10:20:12.977259    8435 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0929 10:20:12.977340    8435 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0929 10:20:12.977406    8435 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0929 10:20:12.977480    8435 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0929 10:20:12.977552    8435 kubeadm.go:310] CGROUPS_IO: enabled
	I0929 10:20:13.024703    8435 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0929 10:20:13.024805    8435 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0929 10:20:13.024961    8435 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0929 10:20:13.031313    8435 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0929 10:20:13.033810    8435 out.go:252]   - Generating certificates and keys ...
	I0929 10:20:13.033907    8435 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0929 10:20:13.033999    8435 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0929 10:20:13.192665    8435 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0929 10:20:13.744988    8435 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0929 10:20:14.130810    8435 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0929 10:20:14.591862    8435 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0929 10:20:14.943476    8435 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0929 10:20:14.943652    8435 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-300979 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0929 10:20:15.170285    8435 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0929 10:20:15.170407    8435 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-300979 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0929 10:20:15.315458    8435 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0929 10:20:15.553527    8435 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0929 10:20:15.795210    8435 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0929 10:20:15.795289    8435 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0929 10:20:16.128925    8435 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0929 10:20:16.265611    8435 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0929 10:20:16.486510    8435 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0929 10:20:17.298331    8435 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0929 10:20:17.408962    8435 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0929 10:20:17.409551    8435 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0929 10:20:17.413655    8435 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0929 10:20:17.415490    8435 out.go:252]   - Booting up control plane ...
	I0929 10:20:17.415622    8435 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0929 10:20:17.415731    8435 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0929 10:20:17.416475    8435 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0929 10:20:17.425368    8435 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0929 10:20:17.425507    8435 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0929 10:20:17.431281    8435 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0929 10:20:17.431497    8435 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0929 10:20:17.431543    8435 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0929 10:20:17.513808    8435 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0929 10:20:17.514024    8435 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0929 10:20:18.015454    8435 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.776627ms
	I0929 10:20:18.019526    8435 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0929 10:20:18.019643    8435 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0929 10:20:18.019787    8435 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0929 10:20:18.019887    8435 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0929 10:20:19.816028    8435 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 1.796439337s
	I0929 10:20:20.496401    8435 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.47683341s
	I0929 10:20:22.021681    8435 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 4.002045662s
	I0929 10:20:22.032520    8435 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0929 10:20:22.041716    8435 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0929 10:20:22.049651    8435 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0929 10:20:22.049978    8435 kubeadm.go:310] [mark-control-plane] Marking the node addons-300979 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0929 10:20:22.056820    8435 kubeadm.go:310] [bootstrap-token] Using token: m3lqn1.wxafwqcyphy39w46
	I0929 10:20:22.058103    8435 out.go:252]   - Configuring RBAC rules ...
	I0929 10:20:22.058276    8435 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0929 10:20:22.061558    8435 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0929 10:20:22.066190    8435 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0929 10:20:22.068358    8435 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0929 10:20:22.070694    8435 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0929 10:20:22.072848    8435 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0929 10:20:22.427517    8435 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0929 10:20:22.843136    8435 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0929 10:20:23.428671    8435 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0929 10:20:23.429430    8435 kubeadm.go:310] 
	I0929 10:20:23.429547    8435 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0929 10:20:23.429567    8435 kubeadm.go:310] 
	I0929 10:20:23.429709    8435 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0929 10:20:23.429727    8435 kubeadm.go:310] 
	I0929 10:20:23.429754    8435 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0929 10:20:23.429859    8435 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0929 10:20:23.429957    8435 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0929 10:20:23.429979    8435 kubeadm.go:310] 
	I0929 10:20:23.430065    8435 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0929 10:20:23.430080    8435 kubeadm.go:310] 
	I0929 10:20:23.430153    8435 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0929 10:20:23.430162    8435 kubeadm.go:310] 
	I0929 10:20:23.430234    8435 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0929 10:20:23.430359    8435 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0929 10:20:23.430470    8435 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0929 10:20:23.430483    8435 kubeadm.go:310] 
	I0929 10:20:23.430614    8435 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0929 10:20:23.430730    8435 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0929 10:20:23.430740    8435 kubeadm.go:310] 
	I0929 10:20:23.430856    8435 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token m3lqn1.wxafwqcyphy39w46 \
	I0929 10:20:23.431018    8435 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0474627f69595a81b485f1a0b43969df121cfebfa6bceacc6be88504587ad0bb \
	I0929 10:20:23.431047    8435 kubeadm.go:310] 	--control-plane 
	I0929 10:20:23.431063    8435 kubeadm.go:310] 
	I0929 10:20:23.431185    8435 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0929 10:20:23.431194    8435 kubeadm.go:310] 
	I0929 10:20:23.431272    8435 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token m3lqn1.wxafwqcyphy39w46 \
	I0929 10:20:23.431391    8435 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0474627f69595a81b485f1a0b43969df121cfebfa6bceacc6be88504587ad0bb 
	I0929 10:20:23.433402    8435 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1040-gcp\n", err: exit status 1
	I0929 10:20:23.433499    8435 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0929 10:20:23.433528    8435 cni.go:84] Creating CNI manager for ""
	I0929 10:20:23.433538    8435 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 10:20:23.435671    8435 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0929 10:20:23.436742    8435 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0929 10:20:23.440536    8435 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0929 10:20:23.440557    8435 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0929 10:20:23.459810    8435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0929 10:20:23.670398    8435 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0929 10:20:23.670494    8435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:20:23.670796    8435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-300979 minikube.k8s.io/updated_at=2025_09_29T10_20_23_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=c703192fb7638284bed1945941837d6f5d9e8170 minikube.k8s.io/name=addons-300979 minikube.k8s.io/primary=true
	I0929 10:20:23.745574    8435 ops.go:34] apiserver oom_adj: -16
	I0929 10:20:23.745654    8435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:20:24.246655    8435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:20:24.746689    8435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:20:25.245760    8435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:20:25.745787    8435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:20:26.246189    8435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:20:26.746057    8435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:20:27.245923    8435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:20:27.746536    8435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:20:28.245691    8435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 10:20:28.308972    8435 kubeadm.go:1105] duration metric: took 4.638522305s to wait for elevateKubeSystemPrivileges
	I0929 10:20:28.309010    8435 kubeadm.go:394] duration metric: took 15.509153871s to StartCluster
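Note: the burst of kubectl get sa default calls above is a readiness gate, not noise: the controller-manager creates the default service account asynchronously after the API server comes up, and minikube polls for it at roughly 500ms intervals before binding cluster-admin to kube-system:default. The same wait as a shell sketch:

	# poll until the default service account exists, then proceed
	until kubectl --context addons-300979 get sa default >/dev/null 2>&1; do
	  sleep 0.5
	done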
	I0929 10:20:28.309028    8435 settings.go:142] acquiring lock: {Name:mk6ee080d6e685911798df324cd6ca69078a896a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:20:28.309127    8435 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21657-3615/kubeconfig
	I0929 10:20:28.309547    8435 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21657-3615/kubeconfig: {Name:mkda9ba1384e256b26be96e8e58b60c08877f346 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:20:28.309729    8435 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0929 10:20:28.309753    8435 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0929 10:20:28.309817    8435 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0929 10:20:28.309948    8435 addons.go:69] Setting inspektor-gadget=true in profile "addons-300979"
	I0929 10:20:28.309968    8435 addons.go:69] Setting yakd=true in profile "addons-300979"
	I0929 10:20:28.309983    8435 addons.go:238] Setting addon inspektor-gadget=true in "addons-300979"
	I0929 10:20:28.309994    8435 addons.go:69] Setting metrics-server=true in profile "addons-300979"
	I0929 10:20:28.309985    8435 addons.go:69] Setting default-storageclass=true in profile "addons-300979"
	I0929 10:20:28.310030    8435 host.go:66] Checking if "addons-300979" exists ...
	I0929 10:20:28.310030    8435 config.go:182] Loaded profile config "addons-300979": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 10:20:28.310040    8435 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-300979"
	I0929 10:20:28.310044    8435 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-300979"
	I0929 10:20:28.310056    8435 addons.go:69] Setting registry=true in profile "addons-300979"
	I0929 10:20:28.310034    8435 addons.go:238] Setting addon metrics-server=true in "addons-300979"
	I0929 10:20:28.310084    8435 addons.go:238] Setting addon registry=true in "addons-300979"
	I0929 10:20:28.310107    8435 host.go:66] Checking if "addons-300979" exists ...
	I0929 10:20:28.310114    8435 addons.go:69] Setting registry-creds=true in profile "addons-300979"
	I0929 10:20:28.310095    8435 addons.go:69] Setting ingress=true in profile "addons-300979"
	I0929 10:20:28.310127    8435 addons.go:238] Setting addon registry-creds=true in "addons-300979"
	I0929 10:20:28.310139    8435 addons.go:238] Setting addon ingress=true in "addons-300979"
	I0929 10:20:28.310150    8435 host.go:66] Checking if "addons-300979" exists ...
	I0929 10:20:28.310165    8435 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-300979"
	I0929 10:20:28.310179    8435 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-300979"
	I0929 10:20:28.310208    8435 host.go:66] Checking if "addons-300979" exists ...
	I0929 10:20:28.310208    8435 addons.go:69] Setting ingress-dns=true in profile "addons-300979"
	I0929 10:20:28.310253    8435 addons.go:69] Setting gcp-auth=true in profile "addons-300979"
	I0929 10:20:28.310275    8435 mustload.go:65] Loading cluster: addons-300979
	I0929 10:20:28.310282    8435 addons.go:238] Setting addon ingress-dns=true in "addons-300979"
	I0929 10:20:28.310343    8435 host.go:66] Checking if "addons-300979" exists ...
	I0929 10:20:28.310443    8435 cli_runner.go:164] Run: docker container inspect addons-300979 --format={{.State.Status}}
	I0929 10:20:28.310443    8435 config.go:182] Loaded profile config "addons-300979": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 10:20:28.310459    8435 cli_runner.go:164] Run: docker container inspect addons-300979 --format={{.State.Status}}
	I0929 10:20:28.310589    8435 cli_runner.go:164] Run: docker container inspect addons-300979 --format={{.State.Status}}
	I0929 10:20:28.310600    8435 cli_runner.go:164] Run: docker container inspect addons-300979 --format={{.State.Status}}
	I0929 10:20:28.310678    8435 cli_runner.go:164] Run: docker container inspect addons-300979 --format={{.State.Status}}
	I0929 10:20:28.310683    8435 cli_runner.go:164] Run: docker container inspect addons-300979 --format={{.State.Status}}
	I0929 10:20:28.310725    8435 cli_runner.go:164] Run: docker container inspect addons-300979 --format={{.State.Status}}
	I0929 10:20:28.310796    8435 cli_runner.go:164] Run: docker container inspect addons-300979 --format={{.State.Status}}
	I0929 10:20:28.310240    8435 addons.go:69] Setting storage-provisioner=true in profile "addons-300979"
	I0929 10:20:28.311025    8435 addons.go:238] Setting addon storage-provisioner=true in "addons-300979"
	I0929 10:20:28.311060    8435 host.go:66] Checking if "addons-300979" exists ...
	I0929 10:20:28.309987    8435 addons.go:238] Setting addon yakd=true in "addons-300979"
	I0929 10:20:28.311452    8435 host.go:66] Checking if "addons-300979" exists ...
	I0929 10:20:28.311534    8435 cli_runner.go:164] Run: docker container inspect addons-300979 --format={{.State.Status}}
	I0929 10:20:28.311920    8435 cli_runner.go:164] Run: docker container inspect addons-300979 --format={{.State.Status}}
	I0929 10:20:28.312087    8435 out.go:179] * Verifying Kubernetes components...
	I0929 10:20:28.310109    8435 host.go:66] Checking if "addons-300979" exists ...
	I0929 10:20:28.310052    8435 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-300979"
	I0929 10:20:28.312462    8435 host.go:66] Checking if "addons-300979" exists ...
	I0929 10:20:28.312920    8435 cli_runner.go:164] Run: docker container inspect addons-300979 --format={{.State.Status}}
	I0929 10:20:28.312150    8435 addons.go:69] Setting volumesnapshots=true in profile "addons-300979"
	I0929 10:20:28.313001    8435 addons.go:238] Setting addon volumesnapshots=true in "addons-300979"
	I0929 10:20:28.313047    8435 host.go:66] Checking if "addons-300979" exists ...
	I0929 10:20:28.313065    8435 cli_runner.go:164] Run: docker container inspect addons-300979 --format={{.State.Status}}
	I0929 10:20:28.312160    8435 addons.go:69] Setting volcano=true in profile "addons-300979"
	I0929 10:20:28.316264    8435 addons.go:238] Setting addon volcano=true in "addons-300979"
	I0929 10:20:28.316301    8435 host.go:66] Checking if "addons-300979" exists ...
	I0929 10:20:28.316789    8435 cli_runner.go:164] Run: docker container inspect addons-300979 --format={{.State.Status}}
	I0929 10:20:28.312190    8435 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-300979"
	I0929 10:20:28.317270    8435 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-300979"
	I0929 10:20:28.317297    8435 host.go:66] Checking if "addons-300979" exists ...
	I0929 10:20:28.312195    8435 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-300979"
	I0929 10:20:28.317513    8435 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-300979"
	I0929 10:20:28.317545    8435 host.go:66] Checking if "addons-300979" exists ...
	I0929 10:20:28.317738    8435 cli_runner.go:164] Run: docker container inspect addons-300979 --format={{.State.Status}}
	I0929 10:20:28.312199    8435 addons.go:69] Setting cloud-spanner=true in profile "addons-300979"
	I0929 10:20:28.317916    8435 addons.go:238] Setting addon cloud-spanner=true in "addons-300979"
	I0929 10:20:28.317955    8435 host.go:66] Checking if "addons-300979" exists ...
	I0929 10:20:28.319854    8435 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 10:20:28.319942    8435 cli_runner.go:164] Run: docker container inspect addons-300979 --format={{.State.Status}}
	I0929 10:20:28.320836    8435 cli_runner.go:164] Run: docker container inspect addons-300979 --format={{.State.Status}}
	I0929 10:20:28.326257    8435 cli_runner.go:164] Run: docker container inspect addons-300979 --format={{.State.Status}}
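Note: each addon goroutine re-checks the machine with the same docker inspect one-liner before opening an SSH session, which is why the command repeats a dozen times within a few milliseconds; it is cheap and read-only. Run directly, it prints the container state:

	docker container inspect addons-300979 --format={{.State.Status}}

which should report running while the node is up, matching the State block in the docker inspect output earlier in this report.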
	I0929 10:20:28.366049    8435 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I0929 10:20:28.367323    8435 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0929 10:20:28.367354    8435 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I0929 10:20:28.367420    8435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-300979
	I0929 10:20:28.367673    8435 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0929 10:20:28.368823    8435 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0929 10:20:28.370240    8435 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0929 10:20:28.371621    8435 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0929 10:20:28.372668    8435 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0929 10:20:28.373424    8435 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-300979"
	I0929 10:20:28.373480    8435 host.go:66] Checking if "addons-300979" exists ...
	I0929 10:20:28.374388    8435 cli_runner.go:164] Run: docker container inspect addons-300979 --format={{.State.Status}}
	I0929 10:20:28.374672    8435 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0929 10:20:28.377912    8435 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0929 10:20:28.378978    8435 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0929 10:20:28.380194    8435 out.go:179]   - Using image docker.io/registry:3.0.0
	I0929 10:20:28.380313    8435 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0929 10:20:28.381642    8435 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0929 10:20:28.381663    8435 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0929 10:20:28.381706    8435 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0929 10:20:28.381720    8435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-300979
	I0929 10:20:28.381720    8435 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0929 10:20:28.381773    8435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-300979
	I0929 10:20:28.383108    8435 addons.go:238] Setting addon default-storageclass=true in "addons-300979"
	I0929 10:20:28.386990    8435 host.go:66] Checking if "addons-300979" exists ...
	I0929 10:20:28.387493    8435 cli_runner.go:164] Run: docker container inspect addons-300979 --format={{.State.Status}}
	I0929 10:20:28.389607    8435 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I0929 10:20:28.392997    8435 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0929 10:20:28.394010    8435 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0929 10:20:28.394026    8435 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0929 10:20:28.394090    8435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-300979
	W0929 10:20:28.395881    8435 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
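Note: the volcano warning is expected on this job rather than a regression: the addon's enable callback rejects the crio runtime outright, so the error surfaces before any manifest is applied. The effective addon set can be confirmed afterwards with:

	minikube -p addons-300979 addons list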
	I0929 10:20:28.397203    8435 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0929 10:20:28.397225    8435 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0929 10:20:28.397283    8435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-300979
	I0929 10:20:28.399335    8435 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I0929 10:20:28.402079    8435 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0929 10:20:28.402147    8435 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I0929 10:20:28.402274    8435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-300979
	I0929 10:20:28.402534    8435 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I0929 10:20:28.403710    8435 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 10:20:28.404699    8435 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 10:20:28.405751    8435 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 10:20:28.405773    8435 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 10:20:28.405836    8435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-300979
	I0929 10:20:28.406196    8435 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 10:20:28.407716    8435 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0929 10:20:28.407736    8435 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0929 10:20:28.407787    8435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-300979
	I0929 10:20:28.405758    8435 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0929 10:20:28.409989    8435 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0929 10:20:28.410965    8435 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0929 10:20:28.411104    8435 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0929 10:20:28.411164    8435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-300979
	I0929 10:20:28.418924    8435 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0929 10:20:28.418926    8435 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.3
	I0929 10:20:28.420495    8435 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0929 10:20:28.420515    8435 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0929 10:20:28.420577    8435 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0929 10:20:28.420594    8435 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0929 10:20:28.420602    8435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-300979
	I0929 10:20:28.420661    8435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-300979
	I0929 10:20:28.422138    8435 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0929 10:20:28.422159    8435 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0929 10:20:28.422240    8435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-300979
	I0929 10:20:28.438616    8435 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I0929 10:20:28.440397    8435 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0929 10:20:28.440423    8435 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0929 10:20:28.440560    8435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-300979
	I0929 10:20:28.452422    8435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21657-3615/.minikube/machines/addons-300979/id_rsa Username:docker}
	I0929 10:20:28.454952    8435 host.go:66] Checking if "addons-300979" exists ...
	I0929 10:20:28.467928    8435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21657-3615/.minikube/machines/addons-300979/id_rsa Username:docker}
	I0929 10:20:28.480500    8435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21657-3615/.minikube/machines/addons-300979/id_rsa Username:docker}
	I0929 10:20:28.482211    8435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21657-3615/.minikube/machines/addons-300979/id_rsa Username:docker}
	I0929 10:20:28.484678    8435 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0929 10:20:28.485591    8435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21657-3615/.minikube/machines/addons-300979/id_rsa Username:docker}
	I0929 10:20:28.486502    8435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21657-3615/.minikube/machines/addons-300979/id_rsa Username:docker}
	I0929 10:20:28.487837    8435 out.go:179]   - Using image docker.io/busybox:stable
	I0929 10:20:28.489125    8435 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0929 10:20:28.489143    8435 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0929 10:20:28.489200    8435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-300979
	I0929 10:20:28.489652    8435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21657-3615/.minikube/machines/addons-300979/id_rsa Username:docker}
	I0929 10:20:28.489946    8435 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 10:20:28.489966    8435 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 10:20:28.490016    8435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-300979
	I0929 10:20:28.504143    8435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21657-3615/.minikube/machines/addons-300979/id_rsa Username:docker}
	I0929 10:20:28.506962    8435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21657-3615/.minikube/machines/addons-300979/id_rsa Username:docker}
	I0929 10:20:28.510776    8435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21657-3615/.minikube/machines/addons-300979/id_rsa Username:docker}
	I0929 10:20:28.511184    8435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21657-3615/.minikube/machines/addons-300979/id_rsa Username:docker}
	I0929 10:20:28.515973    8435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21657-3615/.minikube/machines/addons-300979/id_rsa Username:docker}
	I0929 10:20:28.516819    8435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21657-3615/.minikube/machines/addons-300979/id_rsa Username:docker}
	I0929 10:20:28.535621    8435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21657-3615/.minikube/machines/addons-300979/id_rsa Username:docker}
	I0929 10:20:28.536799    8435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21657-3615/.minikube/machines/addons-300979/id_rsa Username:docker}
	I0929 10:20:28.548208    8435 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 10:20:28.548306    8435 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
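Note: the long sed pipeline above edits the CoreDNS Corefile in place: it inserts a hosts block ahead of the forward plugin so that host.minikube.internal resolves to the container gateway, and adds a log directive after errors. Reconstructed from the sed expressions, the injected fragment is:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}

and the result can be inspected with:

	kubectl --context addons-300979 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'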
	I0929 10:20:28.603906    8435 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0929 10:20:28.603931    8435 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0929 10:20:28.612848    8435 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:20:28.612901    8435 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I0929 10:20:28.624144    8435 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0929 10:20:28.624172    8435 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0929 10:20:28.628317    8435 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0929 10:20:28.628335    8435 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0929 10:20:28.647279    8435 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:20:28.647524    8435 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0929 10:20:28.647541    8435 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0929 10:20:28.648178    8435 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 10:20:28.670396    8435 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0929 10:20:28.670423    8435 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0929 10:20:28.681537    8435 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0929 10:20:28.682919    8435 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0929 10:20:28.682938    8435 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0929 10:20:28.695223    8435 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0929 10:20:28.696402    8435 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0929 10:20:28.701327    8435 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0929 10:20:28.703734    8435 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0929 10:20:28.704078    8435 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0929 10:20:28.707098    8435 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0929 10:20:28.707118    8435 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0929 10:20:28.707233    8435 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0929 10:20:28.707243    8435 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0929 10:20:28.711357    8435 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0929 10:20:28.711378    8435 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0929 10:20:28.713172    8435 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 10:20:28.717974    8435 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0929 10:20:28.730536    8435 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0929 10:20:28.730564    8435 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0929 10:20:28.747460    8435 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0929 10:20:28.802093    8435 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0929 10:20:28.802124    8435 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0929 10:20:28.805925    8435 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0929 10:20:28.805953    8435 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0929 10:20:28.815485    8435 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0929 10:20:28.815510    8435 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0929 10:20:28.833328    8435 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0929 10:20:28.833384    8435 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0929 10:20:28.872528    8435 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0929 10:20:28.872552    8435 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0929 10:20:28.894550    8435 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0929 10:20:28.894632    8435 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0929 10:20:28.902765    8435 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0929 10:20:28.902786    8435 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0929 10:20:28.904491    8435 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 10:20:28.904564    8435 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0929 10:20:28.973957    8435 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0929 10:20:28.973984    8435 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0929 10:20:28.987642    8435 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0929 10:20:28.989560    8435 node_ready.go:35] waiting up to 6m0s for node "addons-300979" to be "Ready" ...
	I0929 10:20:28.994243    8435 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0929 10:20:28.994319    8435 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0929 10:20:29.006657    8435 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0929 10:20:29.008307    8435 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 10:20:29.056648    8435 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0929 10:20:29.056693    8435 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0929 10:20:29.071953    8435 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0929 10:20:29.071975    8435 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0929 10:20:29.123160    8435 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0929 10:20:29.123184    8435 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0929 10:20:29.167360    8435 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0929 10:20:29.176783    8435 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0929 10:20:29.176904    8435 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0929 10:20:29.264793    8435 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0929 10:20:29.495280    8435 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-300979" context rescaled to 1 replicas
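Note: on a single-node cluster minikube trims CoreDNS from the default two replicas down to one; the rescale logged above is equivalent to the following (a sketch, using kubectl where minikube goes through the API directly):

	kubectl --context addons-300979 -n kube-system scale deployment coredns --replicas=1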
	W0929 10:20:29.593968    8435 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:20:29.594032    8435 retry.go:31] will retry after 288.23304ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
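Note: the likely root cause of this retry loop appears earlier in the log: ig-crd.yaml was copied to the node as only 14 bytes (the scp at 10:20:28.367354), so kubectl finds neither an apiVersion nor a kind to validate. The later retries that switch to apply --force fail identically because the file content, not the apply semantics, is at fault. The error reproduces with any manifest missing those two fields (a sketch):

	# a manifest with no apiVersion/kind triggers the same validation error
	printf 'metadata:\n  name: x\n' > /tmp/bad.yaml
	kubectl apply -f /tmp/bad.yaml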
	W0929 10:20:29.639991    8435 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
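Note: the default-storageclass warning is a write conflict, not a missing capability: the local-path and standard storage classes are reconciled concurrently, and this update lost an optimistic-concurrency race ("the object has been modified"). Once the churn settles the same change applies cleanly; a sketch of the annotation the callback was trying to set:

	kubectl --context addons-300979 patch storageclass local-path \
	  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'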
	I0929 10:20:29.819243    8435 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.101231205s)
	I0929 10:20:29.819297    8435 addons.go:479] Verifying addon ingress=true in "addons-300979"
	I0929 10:20:29.819401    8435 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.071907183s)
	I0929 10:20:29.819584    8435 addons.go:479] Verifying addon metrics-server=true in "addons-300979"
	I0929 10:20:29.819608    8435 addons.go:479] Verifying addon registry=true in "addons-300979"
	I0929 10:20:29.831034    8435 out.go:179] * Verifying ingress addon...
	I0929 10:20:29.831034    8435 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-300979 service yakd-dashboard -n yakd-dashboard
	
	I0929 10:20:29.831072    8435 out.go:179] * Verifying registry addon...
	I0929 10:20:29.832893    8435 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0929 10:20:29.834444    8435 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0929 10:20:29.836496    8435 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0929 10:20:29.836516    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:29.837059    8435 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0929 10:20:29.837073    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:29.882607    8435 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:20:30.336929    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:30.337229    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:30.357366    8435 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.189686115s)
	W0929 10:20:30.357426    8435 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0929 10:20:30.357448    8435 retry.go:31] will retry after 297.931865ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
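Note: this failure is an ordering race rather than a bad manifest: the VolumeSnapshot CRDs and a VolumeSnapshotClass instance travel in the same apply, and the API server has not established the new CRDs by the time the instance arrives, hence "ensure CRDs are installed first". The retry absorbs it (the forced re-apply at 10:20:30.655838 completes without another failure). The race can also be avoided explicitly (a sketch):

	# apply CRDs first, wait until they are served, then create instances
	kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=established \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	kubectl apply -f csi-hostpath-snapshotclass.yaml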
	I0929 10:20:30.357579    8435 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.092734122s)
	I0929 10:20:30.357614    8435 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-300979"
	I0929 10:20:30.359234    8435 out.go:179] * Verifying csi-hostpath-driver addon...
	I0929 10:20:30.361442    8435 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0929 10:20:30.364138    8435 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0929 10:20:30.364160    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:20:30.529518    8435 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:20:30.529557    8435 retry.go:31] will retry after 323.178011ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:20:30.655838    8435 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0929 10:20:30.839444    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:30.839642    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:30.853571    8435 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:20:30.941133    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:20:30.992801    8435 node_ready.go:57] node "addons-300979" has "Ready":"False" status (will retry)
	I0929 10:20:31.337138    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:31.337174    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:31.364215    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:31.836683    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:31.836787    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:31.937730    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:32.335885    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:32.337028    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:32.364305    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:32.836168    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:32.837330    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:32.937269    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:33.117339    8435 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.461446842s)
	I0929 10:20:33.117349    8435 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.263747758s)
	W0929 10:20:33.117394    8435 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:20:33.117410    8435 retry.go:31] will retry after 565.09533ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:20:33.335844    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:33.337229    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:33.364516    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:20:33.492667    8435 node_ready.go:57] node "addons-300979" has "Ready":"False" status (will retry)
	I0929 10:20:33.683174    8435 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:20:33.836860    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:33.837223    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:33.865134    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:20:34.209431    8435 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:20:34.209465    8435 retry.go:31] will retry after 996.501236ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:20:34.336131    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:34.337046    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:34.364577    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:34.836004    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:34.837160    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:34.937106    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:35.206329    8435 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:20:35.336261    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:35.337697    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:35.364440    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:20:35.730364    8435 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:20:35.730391    8435 retry.go:31] will retry after 867.598507ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I0929 10:20:35.836307    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:35.836662    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:35.937426    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:20:35.992691    8435 node_ready.go:57] node "addons-300979" has "Ready":"False" status (will retry)
	I0929 10:20:36.064196    8435 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0929 10:20:36.064265    8435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-300979
	I0929 10:20:36.081530    8435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21657-3615/.minikube/machines/addons-300979/id_rsa Username:docker}
	I0929 10:20:36.185813    8435 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0929 10:20:36.203070    8435 addons.go:238] Setting addon gcp-auth=true in "addons-300979"
	I0929 10:20:36.203124    8435 host.go:66] Checking if "addons-300979" exists ...
	I0929 10:20:36.203479    8435 cli_runner.go:164] Run: docker container inspect addons-300979 --format={{.State.Status}}
	I0929 10:20:36.220741    8435 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0929 10:20:36.220795    8435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-300979
	I0929 10:20:36.240543    8435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21657-3615/.minikube/machines/addons-300979/id_rsa Username:docker}
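Editor's note: the ssh client above targets 127.0.0.1:32768 because the node container publishes its guest port 22 on a random host port, which the docker container inspect template in the preceding cli_runner line extracts. A standalone sketch (container name taken from this run; the single quotes in the logged template are dropped so only the bare port is printed):

    // sshport.go: recover the host port mapped to the node container's 22/tcp.
    // Assumes the docker CLI is on PATH.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("docker", "container", "inspect", "-f",
            `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
            "addons-300979").Output()
        if err != nil {
            panic(err)
        }
        port := strings.TrimSpace(string(out)) // e.g. 32768, as in this log
        fmt.Printf("ssh -p %s docker@127.0.0.1\n", port)
    }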
	I0929 10:20:36.333764    8435 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0929 10:20:36.335823    8435 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0929 10:20:36.336658    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:36.336782    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:36.337042    8435 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0929 10:20:36.337057    8435 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0929 10:20:36.355256    8435 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0929 10:20:36.355276    8435 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0929 10:20:36.364617    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:36.373571    8435 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0929 10:20:36.373590    8435 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0929 10:20:36.391319    8435 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0929 10:20:36.599094    8435 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:20:36.694010    8435 addons.go:479] Verifying addon gcp-auth=true in "addons-300979"
	I0929 10:20:36.696096    8435 out.go:179] * Verifying gcp-auth addon...
	I0929 10:20:36.697770    8435 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0929 10:20:36.699830    8435 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0929 10:20:36.699851    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
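Editor's note: these kapi.go:96 lines, which dominate the rest of this log, poll an addon's pods by label selector until they leave Pending. A rough stand-in for that loop using kubectl (selector and namespace taken from this run; this is not the kapi implementation, which talks to the API directly):

    // podwait.go: stand-in for the kapi.go polling loop, via kubectl.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func main() {
        const ns, selector = "gcp-auth", "kubernetes.io/minikube-addons=gcp-auth"
        for {
            out, err := exec.Command("kubectl", "-n", ns, "get", "pods",
                "-l", selector, "-o", "jsonpath={.items[*].status.phase}").Output()
            if err != nil {
                panic(err)
            }
            phases := strings.Fields(string(out))
            fmt.Printf("waiting for pod %q, current state: %v\n", selector, phases)
            allRunning := len(phases) > 0
            for _, p := range phases {
                if p != "Running" {
                    allRunning = false
                }
            }
            if allRunning {
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
    }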
	I0929 10:20:36.836601    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:36.836896    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:36.864865    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:20:37.134247    8435 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:20:37.134282    8435 retry.go:31] will retry after 1.276464364s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I0929 10:20:37.200697    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:37.336402    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:37.336885    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:37.364474    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:37.700657    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:37.836267    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:37.836809    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:37.864143    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:38.201253    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:38.335763    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:38.337266    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:38.364588    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:38.411928    8435 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W0929 10:20:38.492636    8435 node_ready.go:57] node "addons-300979" has "Ready":"False" status (will retry)
	I0929 10:20:38.701754    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:38.836671    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:38.836915    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:38.864540    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:20:38.932037    8435 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:20:38.932066    8435 retry.go:31] will retry after 2.320993857s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I0929 10:20:39.201144    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:39.335750    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:39.337165    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:39.364534    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:39.701264    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:39.835457    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:39.836934    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:39.864309    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:40.201205    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:40.335731    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:40.337337    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:40.364416    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:20:40.492846    8435 node_ready.go:57] node "addons-300979" has "Ready":"False" status (will retry)
	I0929 10:20:40.700370    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:40.835942    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:40.836594    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:40.863739    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:41.200548    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:41.253701    8435 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:20:41.335457    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:41.337085    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:41.364451    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:41.700757    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 10:20:41.772142    8435 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:20:41.772169    8435 retry.go:31] will retry after 5.170368274s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I0929 10:20:41.835701    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:41.837134    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:41.864382    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:42.200706    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:42.336268    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:42.337134    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:42.364308    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:42.700394    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:42.836180    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:42.836604    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:42.863856    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:20:42.992199    8435 node_ready.go:57] node "addons-300979" has "Ready":"False" status (will retry)
	I0929 10:20:43.200737    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:43.336893    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:43.336970    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:43.364204    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:43.701604    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:43.836330    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:43.837100    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:43.864225    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:44.200260    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:44.335941    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:44.337523    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:44.363828    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:44.700838    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:44.836496    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:44.837084    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:44.864290    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:20:44.992816    8435 node_ready.go:57] node "addons-300979" has "Ready":"False" status (will retry)
	I0929 10:20:45.200329    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:45.335983    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:45.336552    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:45.363726    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:45.701175    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:45.835846    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:45.837412    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:45.864805    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:46.201052    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:46.335525    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:46.337398    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:46.364757    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:46.700515    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:46.836330    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:46.836930    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:46.863978    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:46.943209    8435 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W0929 10:20:46.994453    8435 node_ready.go:57] node "addons-300979" has "Ready":"False" status (will retry)
	I0929 10:20:47.201184    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:47.336006    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:47.336614    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:47.364619    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:20:47.466806    8435 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:20:47.466836    8435 retry.go:31] will retry after 3.701431495s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I0929 10:20:47.700965    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:47.836676    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:47.837130    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:47.864309    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:48.200185    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:48.335747    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:48.337171    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:48.364277    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:48.700097    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:48.835735    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:48.837427    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:48.865044    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:49.201428    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:49.335956    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:49.336674    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:49.363695    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:20:49.492325    8435 node_ready.go:57] node "addons-300979" has "Ready":"False" status (will retry)
	I0929 10:20:49.701124    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:49.835547    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:49.837077    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:49.864261    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:50.200197    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:50.335727    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:50.337298    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:50.364617    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:50.700420    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:50.836234    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:50.836577    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:50.863539    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:51.169090    8435 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:20:51.201171    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:51.335911    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:51.337472    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:51.365202    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:20:51.492385    8435 node_ready.go:57] node "addons-300979" has "Ready":"False" status (will retry)
	I0929 10:20:51.701225    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0929 10:20:51.703175    8435 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:20:51.703205    8435 retry.go:31] will retry after 11.672318792s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I0929 10:20:51.835750    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:51.837285    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:51.864421    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:52.200767    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:52.336314    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:52.336982    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:52.364059    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:52.700910    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:52.835419    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:52.836917    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:52.864177    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:53.201063    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:53.335830    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:53.337187    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:53.364452    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:20:53.492817    8435 node_ready.go:57] node "addons-300979" has "Ready":"False" status (will retry)
	I0929 10:20:53.700620    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:53.836518    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:53.837080    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:53.864270    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:54.200380    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:54.336013    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:54.336711    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:54.363995    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:54.700809    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:54.836582    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:54.837133    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:54.864298    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:55.200340    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:55.335734    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:55.337485    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:55.365154    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:55.700795    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:55.836418    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:55.836817    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:55.863986    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:20:55.992390    8435 node_ready.go:57] node "addons-300979" has "Ready":"False" status (will retry)
	I0929 10:20:56.200917    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:56.336517    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:56.337067    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:56.364052    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:56.700935    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:56.835427    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:56.836949    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:56.864023    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:57.201241    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:57.335654    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:57.336578    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:57.363839    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:57.700593    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:57.836141    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:57.836916    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:57.864301    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:20:57.993082    8435 node_ready.go:57] node "addons-300979" has "Ready":"False" status (will retry)
	I0929 10:20:58.200695    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:58.336587    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:58.336781    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:58.364207    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:58.700787    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:58.836327    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:58.836958    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:58.864189    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:59.200154    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:59.335917    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:59.337300    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:59.364485    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:20:59.700628    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:20:59.836287    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:20:59.836779    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:20:59.864173    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:00.200441    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:00.336167    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:00.336627    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:00.363803    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:21:00.492096    8435 node_ready.go:57] node "addons-300979" has "Ready":"False" status (will retry)
	I0929 10:21:00.700619    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:00.836462    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:00.836847    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:00.864199    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:01.200472    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:01.336083    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:01.336910    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:01.364185    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:01.701971    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:01.835462    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:01.836999    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:01.864329    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:02.201438    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:02.336191    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:02.336913    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:02.363930    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:02.700562    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:02.836291    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:02.836808    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:02.864145    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:21:02.992571    8435 node_ready.go:57] node "addons-300979" has "Ready":"False" status (will retry)
	I0929 10:21:03.201342    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:03.335739    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:03.337282    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:03.364669    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:03.375672    8435 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:21:03.701038    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:03.836262    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:03.837774    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:03.863739    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:21:03.893029    8435 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:21:03.893056    8435 retry.go:31] will retry after 8.400445806s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I0929 10:21:04.201065    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:04.335637    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:04.337116    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:04.364488    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:04.700172    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:04.837373    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:04.837463    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:04.863832    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:05.201440    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:05.336034    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:05.336635    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:05.363948    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:21:05.492430    8435 node_ready.go:57] node "addons-300979" has "Ready":"False" status (will retry)
	I0929 10:21:05.700801    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:05.836409    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:05.837026    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:05.863958    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:06.200301    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:06.335844    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:06.337338    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:06.364380    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:06.700023    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:06.835443    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:06.836899    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:06.864252    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:07.200143    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:07.335713    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:07.337328    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:07.364552    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:21:07.492940    8435 node_ready.go:57] node "addons-300979" has "Ready":"False" status (will retry)
	I0929 10:21:07.700118    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:07.835849    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:07.837339    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:07.864515    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:08.200583    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:08.335969    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:08.336662    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:08.363698    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:08.700560    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:08.836264    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:08.836923    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:08.864140    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:09.200279    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:09.335989    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:09.337367    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:09.364595    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:09.700351    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:09.836212    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:09.836530    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:09.863851    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:21:09.992192    8435 node_ready.go:57] node "addons-300979" has "Ready":"False" status (will retry)
	I0929 10:21:10.200764    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:10.336379    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:10.337119    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:10.364716    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:10.700782    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:10.836779    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:10.837229    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:10.864623    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:11.200575    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:11.336083    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:11.336854    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:11.363934    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:11.493758    8435 node_ready.go:49] node "addons-300979" is "Ready"
	I0929 10:21:11.493796    8435 node_ready.go:38] duration metric: took 42.50421296s for node "addons-300979" to be "Ready" ...
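Editor's note: the node_ready.go warnings above come from repeatedly reading the node's Ready condition until it flips to True, which here took 42.5s. A hypothetical equivalent using kubectl's jsonpath filter (node name from this run):

    // nodeready.go: hypothetical equivalent of the node_ready.go check.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func main() {
        const node = "addons-300979"
        jp := `jsonpath={.status.conditions[?(@.type=="Ready")].status}`
        for {
            out, err := exec.Command("kubectl", "get", "node", node, "-o", jp).Output()
            if err != nil {
                panic(err)
            }
            if strings.TrimSpace(string(out)) == "True" {
                fmt.Printf("node %q is Ready\n", node)
                return
            }
            fmt.Printf("node %q not Ready yet (will retry)\n", node)
            time.Sleep(2 * time.Second)
        }
    }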
	I0929 10:21:11.493815    8435 api_server.go:52] waiting for apiserver process to appear ...
	I0929 10:21:11.493892    8435 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 10:21:11.515085    8435 api_server.go:72] duration metric: took 43.205303057s to wait for apiserver process to appear ...
	I0929 10:21:11.515111    8435 api_server.go:88] waiting for apiserver healthz status ...
	I0929 10:21:11.515133    8435 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0929 10:21:11.521031    8435 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0929 10:21:11.524610    8435 api_server.go:141] control plane version: v1.34.0
	I0929 10:21:11.524772    8435 api_server.go:131] duration metric: took 9.525907ms to wait for apiserver health ...
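The healthz wait above boils down to an HTTPS GET against https://192.168.49.2:8443/healthz until it answers 200 with the literal body "ok". A minimal Go sketch of the same probe (InsecureSkipVerify is a shortcut for this illustration; minikube's real client authenticates with the cluster CA and client certificates instead):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // NOTE: skipping TLS verification is an assumption made for brevity.
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://192.168.49.2:8443/healthz")
        if err != nil {
            fmt.Println("healthz unreachable:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        // A healthy apiserver answers 200 with the body "ok", matching the
        // "returned 200: ok" lines in the log above.
        fmt.Printf("status=%d body=%q\n", resp.StatusCode, body)
    }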
	I0929 10:21:11.524793    8435 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 10:21:11.532269    8435 system_pods.go:59] 20 kube-system pods found
	I0929 10:21:11.532364    8435 system_pods.go:61] "amd-gpu-device-plugin-jmnzb" [63390556-2b47-43e7-8bd7-11e2b91c7cc7] Pending
	I0929 10:21:11.532396    8435 system_pods.go:61] "coredns-66bc5c9577-bz57x" [22e06151-8707-4715-9ad2-f3035ae9069e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 10:21:11.532435    8435 system_pods.go:61] "csi-hostpath-attacher-0" [7ca21e5f-7a98-4905-8d97-af45ed2e6220] Pending
	I0929 10:21:11.532463    8435 system_pods.go:61] "csi-hostpath-resizer-0" [10be6a6d-c350-42db-b30e-15564f4dfa52] Pending
	I0929 10:21:11.532478    8435 system_pods.go:61] "csi-hostpathplugin-kppth" [56232a08-4ac4-4518-81d3-8c12825ff0b7] Pending
	I0929 10:21:11.532484    8435 system_pods.go:61] "etcd-addons-300979" [a29094e9-56e8-40c8-8dd2-96f632d97cf5] Running
	I0929 10:21:11.532489    8435 system_pods.go:61] "kindnet-tz5gq" [4f0c6156-5fdf-46ed-8d06-64b9e650d5a1] Running
	I0929 10:21:11.532493    8435 system_pods.go:61] "kube-apiserver-addons-300979" [32448f12-e984-4c0e-acec-165b4c579fc6] Running
	I0929 10:21:11.532498    8435 system_pods.go:61] "kube-controller-manager-addons-300979" [1b22a142-f97e-45dc-9481-5c21e063af49] Running
	I0929 10:21:11.532503    8435 system_pods.go:61] "kube-ingress-dns-minikube" [7fc02c16-a83d-4842-9731-ec1e458005e9] Pending
	I0929 10:21:11.532507    8435 system_pods.go:61] "kube-proxy-82n6s" [80cd22c8-aa2d-4041-9775-5c95c7edf6d9] Running
	I0929 10:21:11.532512    8435 system_pods.go:61] "kube-scheduler-addons-300979" [1c78cffd-9516-4ba9-962d-24638b610550] Running
	I0929 10:21:11.532517    8435 system_pods.go:61] "metrics-server-85b7d694d7-v8zjg" [6d637f4a-557f-4d88-9ab2-01503ec4f7a3] Pending
	I0929 10:21:11.532521    8435 system_pods.go:61] "nvidia-device-plugin-daemonset-vsq7c" [f7dcb2e3-2806-4519-a447-fd8f941f56c5] Pending
	I0929 10:21:11.532568    8435 system_pods.go:61] "registry-66898fdd98-tzw5c" [6783e8bc-6d03-4f51-a028-5692d87c068a] Pending
	I0929 10:21:11.532584    8435 system_pods.go:61] "registry-creds-764b6fb674-gfndd" [90be012e-2f41-475e-91d6-95aed4f047bd] Pending
	I0929 10:21:11.532599    8435 system_pods.go:61] "registry-proxy-mc8zr" [d9145d3f-70fc-493e-a443-621a531cf630] Pending
	I0929 10:21:11.532613    8435 system_pods.go:61] "snapshot-controller-7d9fbc56b8-dvj8z" [553e8e3b-b569-42e4-92f4-0d3a7520c107] Pending
	I0929 10:21:11.532642    8435 system_pods.go:61] "snapshot-controller-7d9fbc56b8-fvhdr" [0be134f7-e400-41a6-8449-175fc7c1ef1d] Pending
	I0929 10:21:11.532671    8435 system_pods.go:61] "storage-provisioner" [f5b0e3b2-5ce5-40b8-8882-12acf83148e5] Pending
	I0929 10:21:11.532680    8435 system_pods.go:74] duration metric: took 7.878822ms to wait for pod list to return data ...
	I0929 10:21:11.532689    8435 default_sa.go:34] waiting for default service account to be created ...
	I0929 10:21:11.539931    8435 default_sa.go:45] found service account: "default"
	I0929 10:21:11.539960    8435 default_sa.go:55] duration metric: took 7.263971ms for default service account to be created ...
	I0929 10:21:11.539973    8435 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 10:21:11.547360    8435 system_pods.go:86] 20 kube-system pods found
	I0929 10:21:11.547395    8435 system_pods.go:89] "amd-gpu-device-plugin-jmnzb" [63390556-2b47-43e7-8bd7-11e2b91c7cc7] Pending
	I0929 10:21:11.547406    8435 system_pods.go:89] "coredns-66bc5c9577-bz57x" [22e06151-8707-4715-9ad2-f3035ae9069e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 10:21:11.547412    8435 system_pods.go:89] "csi-hostpath-attacher-0" [7ca21e5f-7a98-4905-8d97-af45ed2e6220] Pending
	I0929 10:21:11.547420    8435 system_pods.go:89] "csi-hostpath-resizer-0" [10be6a6d-c350-42db-b30e-15564f4dfa52] Pending
	I0929 10:21:11.547425    8435 system_pods.go:89] "csi-hostpathplugin-kppth" [56232a08-4ac4-4518-81d3-8c12825ff0b7] Pending
	I0929 10:21:11.547431    8435 system_pods.go:89] "etcd-addons-300979" [a29094e9-56e8-40c8-8dd2-96f632d97cf5] Running
	I0929 10:21:11.547436    8435 system_pods.go:89] "kindnet-tz5gq" [4f0c6156-5fdf-46ed-8d06-64b9e650d5a1] Running
	I0929 10:21:11.547442    8435 system_pods.go:89] "kube-apiserver-addons-300979" [32448f12-e984-4c0e-acec-165b4c579fc6] Running
	I0929 10:21:11.547451    8435 system_pods.go:89] "kube-controller-manager-addons-300979" [1b22a142-f97e-45dc-9481-5c21e063af49] Running
	I0929 10:21:11.547460    8435 system_pods.go:89] "kube-ingress-dns-minikube" [7fc02c16-a83d-4842-9731-ec1e458005e9] Pending
	I0929 10:21:11.547466    8435 system_pods.go:89] "kube-proxy-82n6s" [80cd22c8-aa2d-4041-9775-5c95c7edf6d9] Running
	I0929 10:21:11.547474    8435 system_pods.go:89] "kube-scheduler-addons-300979" [1c78cffd-9516-4ba9-962d-24638b610550] Running
	I0929 10:21:11.547479    8435 system_pods.go:89] "metrics-server-85b7d694d7-v8zjg" [6d637f4a-557f-4d88-9ab2-01503ec4f7a3] Pending
	I0929 10:21:11.547488    8435 system_pods.go:89] "nvidia-device-plugin-daemonset-vsq7c" [f7dcb2e3-2806-4519-a447-fd8f941f56c5] Pending
	I0929 10:21:11.547493    8435 system_pods.go:89] "registry-66898fdd98-tzw5c" [6783e8bc-6d03-4f51-a028-5692d87c068a] Pending
	I0929 10:21:11.547501    8435 system_pods.go:89] "registry-creds-764b6fb674-gfndd" [90be012e-2f41-475e-91d6-95aed4f047bd] Pending
	I0929 10:21:11.547506    8435 system_pods.go:89] "registry-proxy-mc8zr" [d9145d3f-70fc-493e-a443-621a531cf630] Pending
	I0929 10:21:11.547510    8435 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dvj8z" [553e8e3b-b569-42e4-92f4-0d3a7520c107] Pending
	I0929 10:21:11.547532    8435 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fvhdr" [0be134f7-e400-41a6-8449-175fc7c1ef1d] Pending
	I0929 10:21:11.547536    8435 system_pods.go:89] "storage-provisioner" [f5b0e3b2-5ce5-40b8-8882-12acf83148e5] Pending
	I0929 10:21:11.547554    8435 retry.go:31] will retry after 191.759759ms: missing components: kube-dns
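The retry.go lines show the waiting pattern used throughout this phase: poll the kube-system pods, and if a required component (kube-dns here) is not yet Running, sleep a growing, jittered interval and poll again. A rough sketch of that backoff loop (the intervals and the check function are illustrative, not minikube's actual retry implementation):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retry keeps calling check until it succeeds or attempts run out,
    // roughly doubling the wait and adding jitter, in the spirit of the
    // "will retry after 191.759759ms" lines above.
    func retry(attempts int, base time.Duration, check func() error) error {
        wait := base
        for i := 0; i < attempts; i++ {
            err := check()
            if err == nil {
                return nil
            }
            jitter := time.Duration(rand.Int63n(int64(wait / 2)))
            fmt.Printf("will retry after %v: %v\n", wait+jitter, err)
            time.Sleep(wait + jitter)
            wait *= 2
        }
        return errors.New("gave up waiting")
    }

    func main() {
        calls := 0
        _ = retry(5, 200*time.Millisecond, func() error {
            calls++
            if calls < 3 { // stand-in for "is kube-dns Running yet?"
                return errors.New("missing components: kube-dns")
            }
            return nil
        })
    }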
	I0929 10:21:11.703071    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:11.744002    8435 system_pods.go:86] 20 kube-system pods found
	I0929 10:21:11.744040    8435 system_pods.go:89] "amd-gpu-device-plugin-jmnzb" [63390556-2b47-43e7-8bd7-11e2b91c7cc7] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0929 10:21:11.744050    8435 system_pods.go:89] "coredns-66bc5c9577-bz57x" [22e06151-8707-4715-9ad2-f3035ae9069e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 10:21:11.744060    8435 system_pods.go:89] "csi-hostpath-attacher-0" [7ca21e5f-7a98-4905-8d97-af45ed2e6220] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0929 10:21:11.744069    8435 system_pods.go:89] "csi-hostpath-resizer-0" [10be6a6d-c350-42db-b30e-15564f4dfa52] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0929 10:21:11.744080    8435 system_pods.go:89] "csi-hostpathplugin-kppth" [56232a08-4ac4-4518-81d3-8c12825ff0b7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0929 10:21:11.744091    8435 system_pods.go:89] "etcd-addons-300979" [a29094e9-56e8-40c8-8dd2-96f632d97cf5] Running
	I0929 10:21:11.744098    8435 system_pods.go:89] "kindnet-tz5gq" [4f0c6156-5fdf-46ed-8d06-64b9e650d5a1] Running
	I0929 10:21:11.744106    8435 system_pods.go:89] "kube-apiserver-addons-300979" [32448f12-e984-4c0e-acec-165b4c579fc6] Running
	I0929 10:21:11.744115    8435 system_pods.go:89] "kube-controller-manager-addons-300979" [1b22a142-f97e-45dc-9481-5c21e063af49] Running
	I0929 10:21:11.744122    8435 system_pods.go:89] "kube-ingress-dns-minikube" [7fc02c16-a83d-4842-9731-ec1e458005e9] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 10:21:11.744131    8435 system_pods.go:89] "kube-proxy-82n6s" [80cd22c8-aa2d-4041-9775-5c95c7edf6d9] Running
	I0929 10:21:11.744136    8435 system_pods.go:89] "kube-scheduler-addons-300979" [1c78cffd-9516-4ba9-962d-24638b610550] Running
	I0929 10:21:11.744145    8435 system_pods.go:89] "metrics-server-85b7d694d7-v8zjg" [6d637f4a-557f-4d88-9ab2-01503ec4f7a3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 10:21:11.744157    8435 system_pods.go:89] "nvidia-device-plugin-daemonset-vsq7c" [f7dcb2e3-2806-4519-a447-fd8f941f56c5] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 10:21:11.744164    8435 system_pods.go:89] "registry-66898fdd98-tzw5c" [6783e8bc-6d03-4f51-a028-5692d87c068a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 10:21:11.744175    8435 system_pods.go:89] "registry-creds-764b6fb674-gfndd" [90be012e-2f41-475e-91d6-95aed4f047bd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 10:21:11.744184    8435 system_pods.go:89] "registry-proxy-mc8zr" [d9145d3f-70fc-493e-a443-621a531cf630] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 10:21:11.744194    8435 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dvj8z" [553e8e3b-b569-42e4-92f4-0d3a7520c107] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 10:21:11.744207    8435 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fvhdr" [0be134f7-e400-41a6-8449-175fc7c1ef1d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 10:21:11.744218    8435 system_pods.go:89] "storage-provisioner" [f5b0e3b2-5ce5-40b8-8882-12acf83148e5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 10:21:11.744237    8435 retry.go:31] will retry after 375.964484ms: missing components: kube-dns
	I0929 10:21:11.836391    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:11.837747    8435 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0929 10:21:11.837772    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:11.937085    8435 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0929 10:21:11.937111    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
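The kapi.go waits resolve label selectors such as kubernetes.io/minikube-addons=csi-hostpath-driver to concrete pods ("Found 3 Pods") and then track their phase. A client-go sketch of the same listing, assuming client-go is available and with the kubeconfig path as a placeholder:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Placeholder path; point this at your own kubeconfig.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Same label selector the kapi.go lines poll on.
        pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(),
            metav1.ListOptions{LabelSelector: "kubernetes.io/minikube-addons=csi-hostpath-driver"})
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            fmt.Printf("%s: %s\n", p.Name, p.Status.Phase)
        }
    }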
	I0929 10:21:12.125575    8435 system_pods.go:86] 20 kube-system pods found
	I0929 10:21:12.125612    8435 system_pods.go:89] "amd-gpu-device-plugin-jmnzb" [63390556-2b47-43e7-8bd7-11e2b91c7cc7] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0929 10:21:12.125623    8435 system_pods.go:89] "coredns-66bc5c9577-bz57x" [22e06151-8707-4715-9ad2-f3035ae9069e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 10:21:12.125640    8435 system_pods.go:89] "csi-hostpath-attacher-0" [7ca21e5f-7a98-4905-8d97-af45ed2e6220] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0929 10:21:12.125650    8435 system_pods.go:89] "csi-hostpath-resizer-0" [10be6a6d-c350-42db-b30e-15564f4dfa52] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0929 10:21:12.125658    8435 system_pods.go:89] "csi-hostpathplugin-kppth" [56232a08-4ac4-4518-81d3-8c12825ff0b7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0929 10:21:12.125664    8435 system_pods.go:89] "etcd-addons-300979" [a29094e9-56e8-40c8-8dd2-96f632d97cf5] Running
	I0929 10:21:12.125673    8435 system_pods.go:89] "kindnet-tz5gq" [4f0c6156-5fdf-46ed-8d06-64b9e650d5a1] Running
	I0929 10:21:12.125685    8435 system_pods.go:89] "kube-apiserver-addons-300979" [32448f12-e984-4c0e-acec-165b4c579fc6] Running
	I0929 10:21:12.125690    8435 system_pods.go:89] "kube-controller-manager-addons-300979" [1b22a142-f97e-45dc-9481-5c21e063af49] Running
	I0929 10:21:12.125699    8435 system_pods.go:89] "kube-ingress-dns-minikube" [7fc02c16-a83d-4842-9731-ec1e458005e9] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 10:21:12.125704    8435 system_pods.go:89] "kube-proxy-82n6s" [80cd22c8-aa2d-4041-9775-5c95c7edf6d9] Running
	I0929 10:21:12.125710    8435 system_pods.go:89] "kube-scheduler-addons-300979" [1c78cffd-9516-4ba9-962d-24638b610550] Running
	I0929 10:21:12.125718    8435 system_pods.go:89] "metrics-server-85b7d694d7-v8zjg" [6d637f4a-557f-4d88-9ab2-01503ec4f7a3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 10:21:12.125727    8435 system_pods.go:89] "nvidia-device-plugin-daemonset-vsq7c" [f7dcb2e3-2806-4519-a447-fd8f941f56c5] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 10:21:12.125738    8435 system_pods.go:89] "registry-66898fdd98-tzw5c" [6783e8bc-6d03-4f51-a028-5692d87c068a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 10:21:12.125749    8435 system_pods.go:89] "registry-creds-764b6fb674-gfndd" [90be012e-2f41-475e-91d6-95aed4f047bd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 10:21:12.125757    8435 system_pods.go:89] "registry-proxy-mc8zr" [d9145d3f-70fc-493e-a443-621a531cf630] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 10:21:12.125769    8435 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dvj8z" [553e8e3b-b569-42e4-92f4-0d3a7520c107] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 10:21:12.125789    8435 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fvhdr" [0be134f7-e400-41a6-8449-175fc7c1ef1d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 10:21:12.125795    8435 system_pods.go:89] "storage-provisioner" [f5b0e3b2-5ce5-40b8-8882-12acf83148e5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 10:21:12.125813    8435 retry.go:31] will retry after 476.508145ms: missing components: kube-dns
	I0929 10:21:12.223921    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:12.293995    8435 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:21:12.336442    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:12.336955    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:12.365064    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:12.607545    8435 system_pods.go:86] 20 kube-system pods found
	I0929 10:21:12.607586    8435 system_pods.go:89] "amd-gpu-device-plugin-jmnzb" [63390556-2b47-43e7-8bd7-11e2b91c7cc7] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0929 10:21:12.607596    8435 system_pods.go:89] "coredns-66bc5c9577-bz57x" [22e06151-8707-4715-9ad2-f3035ae9069e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 10:21:12.607606    8435 system_pods.go:89] "csi-hostpath-attacher-0" [7ca21e5f-7a98-4905-8d97-af45ed2e6220] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0929 10:21:12.607614    8435 system_pods.go:89] "csi-hostpath-resizer-0" [10be6a6d-c350-42db-b30e-15564f4dfa52] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0929 10:21:12.607621    8435 system_pods.go:89] "csi-hostpathplugin-kppth" [56232a08-4ac4-4518-81d3-8c12825ff0b7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0929 10:21:12.607627    8435 system_pods.go:89] "etcd-addons-300979" [a29094e9-56e8-40c8-8dd2-96f632d97cf5] Running
	I0929 10:21:12.607634    8435 system_pods.go:89] "kindnet-tz5gq" [4f0c6156-5fdf-46ed-8d06-64b9e650d5a1] Running
	I0929 10:21:12.607640    8435 system_pods.go:89] "kube-apiserver-addons-300979" [32448f12-e984-4c0e-acec-165b4c579fc6] Running
	I0929 10:21:12.607645    8435 system_pods.go:89] "kube-controller-manager-addons-300979" [1b22a142-f97e-45dc-9481-5c21e063af49] Running
	I0929 10:21:12.607654    8435 system_pods.go:89] "kube-ingress-dns-minikube" [7fc02c16-a83d-4842-9731-ec1e458005e9] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 10:21:12.607658    8435 system_pods.go:89] "kube-proxy-82n6s" [80cd22c8-aa2d-4041-9775-5c95c7edf6d9] Running
	I0929 10:21:12.607665    8435 system_pods.go:89] "kube-scheduler-addons-300979" [1c78cffd-9516-4ba9-962d-24638b610550] Running
	I0929 10:21:12.607672    8435 system_pods.go:89] "metrics-server-85b7d694d7-v8zjg" [6d637f4a-557f-4d88-9ab2-01503ec4f7a3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 10:21:12.607681    8435 system_pods.go:89] "nvidia-device-plugin-daemonset-vsq7c" [f7dcb2e3-2806-4519-a447-fd8f941f56c5] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 10:21:12.607688    8435 system_pods.go:89] "registry-66898fdd98-tzw5c" [6783e8bc-6d03-4f51-a028-5692d87c068a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 10:21:12.607696    8435 system_pods.go:89] "registry-creds-764b6fb674-gfndd" [90be012e-2f41-475e-91d6-95aed4f047bd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 10:21:12.607703    8435 system_pods.go:89] "registry-proxy-mc8zr" [d9145d3f-70fc-493e-a443-621a531cf630] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 10:21:12.607715    8435 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dvj8z" [553e8e3b-b569-42e4-92f4-0d3a7520c107] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 10:21:12.607725    8435 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fvhdr" [0be134f7-e400-41a6-8449-175fc7c1ef1d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 10:21:12.607733    8435 system_pods.go:89] "storage-provisioner" [f5b0e3b2-5ce5-40b8-8882-12acf83148e5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 10:21:12.607749    8435 retry.go:31] will retry after 505.805196ms: missing components: kube-dns
	I0929 10:21:12.702320    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:12.836858    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:12.836906    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:12.865091    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:21:13.036354    8435 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:21:13.036395    8435 retry.go:31] will retry after 28.339122867s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
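The failure itself is straightforward: kubectl validates every manifest document for a top-level apiVersion and kind, and whatever it parses out of /etc/kubernetes/addons/ig-crd.yaml on the node lacks both, hence "apiVersion not set, kind not set" (with --validate=false as the suggested escape hatch). A stdlib-only Go sketch of that surface check, a naive line scan for illustration rather than kubectl's real schema validation:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // naiveCheck flags the two fields kubectl complained about. Top-level
    // YAML keys sit at column 0, so a prefix scan is enough for this sketch.
    func naiveCheck(path string) error {
        f, err := os.Open(path)
        if err != nil {
            return err
        }
        defer f.Close()
        var hasAPIVersion, hasKind bool
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := sc.Text()
            if strings.HasPrefix(line, "apiVersion:") {
                hasAPIVersion = true
            }
            if strings.HasPrefix(line, "kind:") {
                hasKind = true
            }
        }
        if !hasAPIVersion || !hasKind {
            return fmt.Errorf("%s: apiVersion set=%v, kind set=%v", path, hasAPIVersion, hasKind)
        }
        return nil
    }

    func main() {
        if err := naiveCheck("ig-crd.yaml"); err != nil {
            fmt.Println("validation would fail:", err)
        }
    }

Running it against a document that is empty or contains only comments reproduces the same two missing fields kubectl reports.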
	I0929 10:21:13.117944    8435 system_pods.go:86] 20 kube-system pods found
	I0929 10:21:13.117977    8435 system_pods.go:89] "amd-gpu-device-plugin-jmnzb" [63390556-2b47-43e7-8bd7-11e2b91c7cc7] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0929 10:21:13.117984    8435 system_pods.go:89] "coredns-66bc5c9577-bz57x" [22e06151-8707-4715-9ad2-f3035ae9069e] Running
	I0929 10:21:13.117994    8435 system_pods.go:89] "csi-hostpath-attacher-0" [7ca21e5f-7a98-4905-8d97-af45ed2e6220] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0929 10:21:13.118001    8435 system_pods.go:89] "csi-hostpath-resizer-0" [10be6a6d-c350-42db-b30e-15564f4dfa52] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0929 10:21:13.118011    8435 system_pods.go:89] "csi-hostpathplugin-kppth" [56232a08-4ac4-4518-81d3-8c12825ff0b7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0929 10:21:13.118017    8435 system_pods.go:89] "etcd-addons-300979" [a29094e9-56e8-40c8-8dd2-96f632d97cf5] Running
	I0929 10:21:13.118023    8435 system_pods.go:89] "kindnet-tz5gq" [4f0c6156-5fdf-46ed-8d06-64b9e650d5a1] Running
	I0929 10:21:13.118032    8435 system_pods.go:89] "kube-apiserver-addons-300979" [32448f12-e984-4c0e-acec-165b4c579fc6] Running
	I0929 10:21:13.118041    8435 system_pods.go:89] "kube-controller-manager-addons-300979" [1b22a142-f97e-45dc-9481-5c21e063af49] Running
	I0929 10:21:13.118051    8435 system_pods.go:89] "kube-ingress-dns-minikube" [7fc02c16-a83d-4842-9731-ec1e458005e9] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0929 10:21:13.118057    8435 system_pods.go:89] "kube-proxy-82n6s" [80cd22c8-aa2d-4041-9775-5c95c7edf6d9] Running
	I0929 10:21:13.118061    8435 system_pods.go:89] "kube-scheduler-addons-300979" [1c78cffd-9516-4ba9-962d-24638b610550] Running
	I0929 10:21:13.118067    8435 system_pods.go:89] "metrics-server-85b7d694d7-v8zjg" [6d637f4a-557f-4d88-9ab2-01503ec4f7a3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 10:21:13.118080    8435 system_pods.go:89] "nvidia-device-plugin-daemonset-vsq7c" [f7dcb2e3-2806-4519-a447-fd8f941f56c5] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0929 10:21:13.118092    8435 system_pods.go:89] "registry-66898fdd98-tzw5c" [6783e8bc-6d03-4f51-a028-5692d87c068a] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0929 10:21:13.118107    8435 system_pods.go:89] "registry-creds-764b6fb674-gfndd" [90be012e-2f41-475e-91d6-95aed4f047bd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0929 10:21:13.118115    8435 system_pods.go:89] "registry-proxy-mc8zr" [d9145d3f-70fc-493e-a443-621a531cf630] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0929 10:21:13.118122    8435 system_pods.go:89] "snapshot-controller-7d9fbc56b8-dvj8z" [553e8e3b-b569-42e4-92f4-0d3a7520c107] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 10:21:13.118134    8435 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fvhdr" [0be134f7-e400-41a6-8449-175fc7c1ef1d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0929 10:21:13.118143    8435 system_pods.go:89] "storage-provisioner" [f5b0e3b2-5ce5-40b8-8882-12acf83148e5] Running
	I0929 10:21:13.118154    8435 system_pods.go:126] duration metric: took 1.578174513s to wait for k8s-apps to be running ...
	I0929 10:21:13.118164    8435 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 10:21:13.118218    8435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 10:21:13.130551    8435 system_svc.go:56] duration metric: took 12.376395ms WaitForService to wait for kubelet
	I0929 10:21:13.130581    8435 kubeadm.go:578] duration metric: took 44.820805461s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
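WaitForService is a systemctl is-active invocation run on the node over SSH; with --quiet it prints nothing and reports through its exit code alone, which is why the log records only a duration. A local Go sketch of the same kind of check:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // --quiet suppresses output; exit code 0 means the unit is active.
        cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
        if err := cmd.Run(); err != nil {
            fmt.Println("kubelet service is not active:", err)
            return
        }
        fmt.Println("kubelet service is active")
    }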
	I0929 10:21:13.130609    8435 node_conditions.go:102] verifying NodePressure condition ...
	I0929 10:21:13.133162    8435 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0929 10:21:13.133193    8435 node_conditions.go:123] node cpu capacity is 8
	I0929 10:21:13.133209    8435 node_conditions.go:105] duration metric: took 2.59398ms to run NodePressure ...
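The NodePressure step reads the node's reported capacities (304681132Ki of ephemeral storage, 8 CPUs). Kubernetes encodes such values as resource.Quantity strings, where Ki is a binary (1024-based) suffix; a small sketch decoding the figure from the log, assuming k8s.io/apimachinery is on the module path:

    package main

    import (
        "fmt"

        "k8s.io/apimachinery/pkg/api/resource"
    )

    func main() {
        // Capacity string copied from the node_conditions.go line above.
        q, err := resource.ParseQuantity("304681132Ki")
        if err != nil {
            panic(err)
        }
        // 304681132 * 1024 bytes, roughly 290 GiB of ephemeral storage.
        fmt.Printf("ephemeral storage: %d bytes (~%.0f GiB)\n",
            q.Value(), float64(q.Value())/(1<<30))
    }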
	I0929 10:21:13.133221    8435 start.go:241] waiting for startup goroutines ...
	I0929 10:21:13.200852    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:13.337134    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:13.337141    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:13.364616    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:13.701410    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:13.836463    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:13.836843    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:13.864434    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:14.201645    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:14.336590    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:14.336967    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:14.364713    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:14.700578    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:14.836496    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:14.836730    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:14.864068    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:15.200622    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:15.336530    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:15.336865    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:15.364338    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:15.700812    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:15.836450    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:15.837032    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:15.864552    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:16.201751    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:16.336887    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:16.337187    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:16.364979    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:16.700410    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:16.836458    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:16.836660    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:16.864262    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:17.200844    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:17.336837    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:17.337598    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:17.364404    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:17.700976    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:17.837062    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:17.837204    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:17.864676    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:18.201572    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:18.336648    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:18.337115    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:18.364517    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:18.700855    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:18.837407    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:18.837499    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:18.865413    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:19.201315    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:19.338264    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:19.338585    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:19.364623    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:19.701338    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:19.836473    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:19.836485    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:19.865108    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:20.200652    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:20.336521    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:20.337096    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:20.364836    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:20.701020    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:20.837492    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:20.837634    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:20.864997    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:21.200188    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:21.335977    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:21.337559    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:21.364302    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:21.700905    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:21.836986    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:21.837311    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:21.865306    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:22.201632    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:22.336837    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:22.337180    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:22.365565    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:22.701616    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:22.836774    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:22.837311    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:22.865669    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:23.201310    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:23.335829    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:23.337652    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:23.364447    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:23.701940    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:23.836137    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:23.837582    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:23.864244    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:24.201543    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:24.336453    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:24.337002    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:24.365077    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:24.700772    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:24.836521    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:24.837140    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:24.864853    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:25.200385    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:25.336165    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:25.336749    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:25.364456    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:25.701133    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:25.835762    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:25.837280    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:25.864532    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:26.201462    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:26.336711    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:26.336920    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:26.364933    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:26.701230    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:26.835997    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:26.836694    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:26.864759    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:27.201408    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:27.357682    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:27.357706    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:27.397793    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:27.701653    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:27.848120    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:27.915329    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:27.915731    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:28.201020    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:28.335444    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:28.337078    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:28.364749    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:28.700635    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:28.836406    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:28.836790    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:28.866630    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:29.201708    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:29.337289    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:29.337306    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:29.365269    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:29.700980    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:29.836457    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:29.837174    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:29.865150    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:30.200633    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:30.336718    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:30.336857    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:30.364341    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:30.701215    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:30.836039    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:30.837219    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:30.864592    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:31.201138    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:31.336200    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:31.337469    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:31.365123    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:31.700651    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:31.836346    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:31.837080    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:31.864830    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:32.200599    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:32.336565    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:32.336933    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:32.364463    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:32.700966    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:32.836551    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:32.837014    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:32.864963    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:33.200628    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:33.336893    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:33.336894    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:33.364392    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:33.701238    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:33.837013    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:33.837287    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:33.864672    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:34.202183    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:34.336995    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:34.337031    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:34.365064    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:34.704746    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:34.841041    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:34.841579    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:34.865114    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:35.201005    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:35.336856    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:35.337417    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:35.365150    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:35.700959    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:35.836965    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:35.837342    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:35.865046    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:36.200668    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:36.337052    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:36.337045    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:36.365062    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:36.700674    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:36.836480    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:36.837358    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:36.865260    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:37.200856    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:37.336777    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:37.337320    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:37.365394    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:37.700750    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:37.836736    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:37.837007    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:37.865222    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:38.200995    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:38.336330    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:38.337412    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:38.365002    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:38.701149    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:38.836348    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:38.836964    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:38.865102    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:39.200977    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:39.336014    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:39.337262    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:39.365109    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:39.700654    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:39.836528    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:39.836866    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:39.864649    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:40.201281    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:40.336317    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:40.337674    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:40.364358    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:40.700932    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:40.837106    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:40.837515    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:40.864724    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:41.201303    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:41.336181    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:41.336537    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:41.375825    8435 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0929 10:21:41.438419    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:41.701181    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:41.837194    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:41.837827    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:41.864226    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0929 10:21:41.980618    8435 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:21:41.980655    8435 retry.go:31] will retry after 23.246204115s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 10:21:42.202308    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:42.336789    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:42.336819    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:42.389569    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:42.761678    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:42.836319    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:42.836926    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:42.864517    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:43.200923    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:43.335488    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:43.337020    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:43.364686    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:43.700473    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:43.836386    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:43.837070    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:43.864492    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:44.201432    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:44.336670    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:44.336719    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:44.364155    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:44.701675    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:44.836475    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:44.837176    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:44.865259    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:45.200977    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:45.336760    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:45.337238    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:45.365318    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:45.701384    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:45.836215    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:45.836741    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0929 10:21:45.864900    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:46.200825    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:46.337407    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:46.337419    8435 kapi.go:107] duration metric: took 1m16.502975948s to wait for kubernetes.io/minikube-addons=registry ...
	I0929 10:21:46.365441    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:46.701286    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:46.836154    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:46.864629    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:47.201403    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:47.336143    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:47.364900    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:47.700699    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:47.836611    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:47.864573    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:48.202010    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:48.336178    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:48.365098    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:48.700845    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:48.835910    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:48.864676    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:49.201227    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:49.335970    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:49.364670    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:49.700596    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:49.836592    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:49.865084    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:50.202735    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:50.340794    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:50.365239    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:50.701438    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:50.836624    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:50.865340    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:51.201062    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:51.335923    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:51.364903    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:51.700246    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:51.836332    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:51.865260    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:52.201240    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:52.336203    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:52.365323    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:52.701120    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:52.836252    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:52.864898    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:53.210604    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:53.336383    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:53.365219    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:53.701125    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:53.835959    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:53.865292    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:54.200303    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:54.336344    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:54.437752    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:54.700228    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:54.835899    8435 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0929 10:21:54.864342    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:55.201753    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:55.336896    8435 kapi.go:107] duration metric: took 1m25.503977961s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0929 10:21:55.364576    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:55.701221    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:55.865016    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:56.200657    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:56.365063    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:56.731019    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0929 10:21:56.864884    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:57.200277    8435 kapi.go:107] duration metric: took 1m20.502501336s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0929 10:21:57.201746    8435 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-300979 cluster.
	I0929 10:21:57.202980    8435 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0929 10:21:57.204566    8435 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
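
For reference, the opt-out mentioned in the message above is an ordinary pod label. A minimal sketch, assuming minikube's documented `gcp-auth-skip-secret` key (pod name and image are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: no-gcp-creds                 # illustrative name
  labels:
    gcp-auth-skip-secret: "true"     # gcp-auth webhook skips mounting credentials into this pod
spec:
  containers:
  - name: app
    image: busybox:1.36
    command: ["sleep", "3600"]
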
	I0929 10:21:57.365037    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:57.864748    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:58.365523    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:58.865464    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:59.364735    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:21:59.864870    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:00.365052    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:00.865201    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:01.364754    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:01.864855    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:02.365139    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:02.865560    8435 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0929 10:22:03.364964    8435 kapi.go:107] duration metric: took 1m33.003522218s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0929 10:22:05.227809    8435 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W0929 10:22:05.746589    8435 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0929 10:22:05.746697    8435 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I0929 10:22:05.748631    8435 out.go:179] * Enabled addons: storage-provisioner, cloud-spanner, amd-gpu-device-plugin, ingress-dns, registry-creds, nvidia-device-plugin, storage-provisioner-rancher, metrics-server, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0929 10:22:05.749809    8435 addons.go:514] duration metric: took 1m37.439998336s for enable addons: enabled=[storage-provisioner cloud-spanner amd-gpu-device-plugin ingress-dns registry-creds nvidia-device-plugin storage-provisioner-rancher metrics-server yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0929 10:22:05.749848    8435 start.go:246] waiting for cluster config update ...
	I0929 10:22:05.749863    8435 start.go:255] writing updated cluster config ...
	I0929 10:22:05.750102    8435 ssh_runner.go:195] Run: rm -f paused
	I0929 10:22:05.753552    8435 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 10:22:05.756763    8435 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-bz57x" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:22:05.760611    8435 pod_ready.go:94] pod "coredns-66bc5c9577-bz57x" is "Ready"
	I0929 10:22:05.760632    8435 pod_ready.go:86] duration metric: took 3.852216ms for pod "coredns-66bc5c9577-bz57x" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:22:05.762413    8435 pod_ready.go:83] waiting for pod "etcd-addons-300979" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:22:05.765667    8435 pod_ready.go:94] pod "etcd-addons-300979" is "Ready"
	I0929 10:22:05.765689    8435 pod_ready.go:86] duration metric: took 3.257405ms for pod "etcd-addons-300979" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:22:05.767203    8435 pod_ready.go:83] waiting for pod "kube-apiserver-addons-300979" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:22:05.770298    8435 pod_ready.go:94] pod "kube-apiserver-addons-300979" is "Ready"
	I0929 10:22:05.770320    8435 pod_ready.go:86] duration metric: took 3.099439ms for pod "kube-apiserver-addons-300979" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:22:05.771815    8435 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-300979" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:22:06.156977    8435 pod_ready.go:94] pod "kube-controller-manager-addons-300979" is "Ready"
	I0929 10:22:06.157002    8435 pod_ready.go:86] duration metric: took 385.168667ms for pod "kube-controller-manager-addons-300979" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:22:06.357271    8435 pod_ready.go:83] waiting for pod "kube-proxy-82n6s" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:22:06.756766    8435 pod_ready.go:94] pod "kube-proxy-82n6s" is "Ready"
	I0929 10:22:06.756795    8435 pod_ready.go:86] duration metric: took 399.499961ms for pod "kube-proxy-82n6s" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:22:06.958039    8435 pod_ready.go:83] waiting for pod "kube-scheduler-addons-300979" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:22:07.356687    8435 pod_ready.go:94] pod "kube-scheduler-addons-300979" is "Ready"
	I0929 10:22:07.356716    8435 pod_ready.go:86] duration metric: took 398.652078ms for pod "kube-scheduler-addons-300979" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 10:22:07.356726    8435 pod_ready.go:40] duration metric: took 1.60314993s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 10:22:07.399954    8435 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0929 10:22:07.503114    8435 out.go:179] * Done! kubectl is now configured to use "addons-300979" cluster and "default" namespace by default
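
On the repeated `apiVersion not set, kind not set` failures above: kubectl's validation rejects any manifest document that lacks its type header, so one document inside /etc/kubernetes/addons/ig-crd.yaml is presumably empty or truncated. Every object kubectl applies needs at least those two fields; a hedged sketch of a generic CRD carrying them (group and names are illustrative, not the actual gadget CRD):

apiVersion: apiextensions.k8s.io/v1   # required on every applied document
kind: CustomResourceDefinition        # required on every applied document
metadata:
  name: examples.example.io
spec:
  group: example.io
  names:
    plural: examples
    singular: example
    kind: Example
  scope: Namespaced
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object

The `--validate=false` escape hatch suggested in the stderr would only suppress the error, not repair the manifest.
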
	
	
	==> CRI-O <==
	Sep 29 10:24:52 addons-300979 crio[934]: time="2025-09-29 10:24:52.432774465Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-2pnc2/POD" id=92c51f5c-6493-45ae-8236-01f65eb0b9fe name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 29 10:24:52 addons-300979 crio[934]: time="2025-09-29 10:24:52.432858619Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 29 10:24:52 addons-300979 crio[934]: time="2025-09-29 10:24:52.460520093Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-2pnc2 Namespace:default ID:6f078aaa4b6ed8f8366659559adf5ce7c9f2f13ee352f5747f8f394e8d4fe93d UID:2018dafe-c788-4301-95cb-8fb525be98ce NetNS:/var/run/netns/f1bf792c-96d0-44f7-afb5-59155a34fd20 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 29 10:24:52 addons-300979 crio[934]: time="2025-09-29 10:24:52.460550043Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-2pnc2 to CNI network \"kindnet\" (type=ptp)"
	Sep 29 10:24:52 addons-300979 crio[934]: time="2025-09-29 10:24:52.470710877Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-2pnc2 Namespace:default ID:6f078aaa4b6ed8f8366659559adf5ce7c9f2f13ee352f5747f8f394e8d4fe93d UID:2018dafe-c788-4301-95cb-8fb525be98ce NetNS:/var/run/netns/f1bf792c-96d0-44f7-afb5-59155a34fd20 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 29 10:24:52 addons-300979 crio[934]: time="2025-09-29 10:24:52.470828099Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-2pnc2 for CNI network kindnet (type=ptp)"
	Sep 29 10:24:52 addons-300979 crio[934]: time="2025-09-29 10:24:52.471685442Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Sep 29 10:24:52 addons-300979 crio[934]: time="2025-09-29 10:24:52.472843375Z" level=info msg="Ran pod sandbox 6f078aaa4b6ed8f8366659559adf5ce7c9f2f13ee352f5747f8f394e8d4fe93d with infra container: default/hello-world-app-5d498dc89-2pnc2/POD" id=92c51f5c-6493-45ae-8236-01f65eb0b9fe name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 29 10:24:52 addons-300979 crio[934]: time="2025-09-29 10:24:52.473977492Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=dd0d91c8-9dcf-484f-95df-7cb282fec86c name=/runtime.v1.ImageService/ImageStatus
	Sep 29 10:24:52 addons-300979 crio[934]: time="2025-09-29 10:24:52.474150345Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=dd0d91c8-9dcf-484f-95df-7cb282fec86c name=/runtime.v1.ImageService/ImageStatus
	Sep 29 10:24:52 addons-300979 crio[934]: time="2025-09-29 10:24:52.474629459Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=4ebc9ba1-5a6b-410b-89b7-387ba81657d3 name=/runtime.v1.ImageService/PullImage
	Sep 29 10:24:52 addons-300979 crio[934]: time="2025-09-29 10:24:52.479401218Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Sep 29 10:24:52 addons-300979 crio[934]: time="2025-09-29 10:24:52.617554305Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Sep 29 10:24:53 addons-300979 crio[934]: time="2025-09-29 10:24:53.011209093Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6" id=4ebc9ba1-5a6b-410b-89b7-387ba81657d3 name=/runtime.v1.ImageService/PullImage
	Sep 29 10:24:53 addons-300979 crio[934]: time="2025-09-29 10:24:53.011811268Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=5ed965b8-5827-4c40-8898-02253721de60 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 10:24:53 addons-300979 crio[934]: time="2025-09-29 10:24:53.012429097Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86],Size_:4944818,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=5ed965b8-5827-4c40-8898-02253721de60 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 10:24:53 addons-300979 crio[934]: time="2025-09-29 10:24:53.013205024Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=9d4a5f24-ae13-46b7-8b9a-7201b18dc8ef name=/runtime.v1.ImageService/ImageStatus
	Sep 29 10:24:53 addons-300979 crio[934]: time="2025-09-29 10:24:53.013753843Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86],Size_:4944818,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=9d4a5f24-ae13-46b7-8b9a-7201b18dc8ef name=/runtime.v1.ImageService/ImageStatus
	Sep 29 10:24:53 addons-300979 crio[934]: time="2025-09-29 10:24:53.017311480Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-2pnc2/hello-world-app" id=bff2cd77-f3ee-4082-976e-244c4614d32b name=/runtime.v1.RuntimeService/CreateContainer
	Sep 29 10:24:53 addons-300979 crio[934]: time="2025-09-29 10:24:53.017419416Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 29 10:24:53 addons-300979 crio[934]: time="2025-09-29 10:24:53.032686663Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/bf08b0fe3780a7e49d357705c3b67616f55bf779a927610153cadeb0e52dfc0f/merged/etc/passwd: no such file or directory"
	Sep 29 10:24:53 addons-300979 crio[934]: time="2025-09-29 10:24:53.032719034Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/bf08b0fe3780a7e49d357705c3b67616f55bf779a927610153cadeb0e52dfc0f/merged/etc/group: no such file or directory"
	Sep 29 10:24:53 addons-300979 crio[934]: time="2025-09-29 10:24:53.101347218Z" level=info msg="Created container 1393063fe1c6edfd7d42855fa557c3728632cb3fd6d4bbf51765c3fad26825d2: default/hello-world-app-5d498dc89-2pnc2/hello-world-app" id=bff2cd77-f3ee-4082-976e-244c4614d32b name=/runtime.v1.RuntimeService/CreateContainer
	Sep 29 10:24:53 addons-300979 crio[934]: time="2025-09-29 10:24:53.101942580Z" level=info msg="Starting container: 1393063fe1c6edfd7d42855fa557c3728632cb3fd6d4bbf51765c3fad26825d2" id=faf49b87-045c-4c0d-90c1-79b738541309 name=/runtime.v1.RuntimeService/StartContainer
	Sep 29 10:24:53 addons-300979 crio[934]: time="2025-09-29 10:24:53.108129660Z" level=info msg="Started container" PID=11943 containerID=1393063fe1c6edfd7d42855fa557c3728632cb3fd6d4bbf51765c3fad26825d2 description=default/hello-world-app-5d498dc89-2pnc2/hello-world-app id=faf49b87-045c-4c0d-90c1-79b738541309 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6f078aaa4b6ed8f8366659559adf5ce7c9f2f13ee352f5747f8f394e8d4fe93d
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD
	1393063fe1c6e       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        Less than a second ago   Running             hello-world-app           0                   6f078aaa4b6ed       hello-world-app-5d498dc89-2pnc2
	2b054b351791f       docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8                              2 minutes ago            Running             nginx                     0                   085fa86c398c0       nginx
	89d0d12d181c6       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          2 minutes ago            Running             busybox                   0                   3fa7de8f37c63       busybox
	89a1f5bb68238       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef             2 minutes ago            Running             controller                0                   552f8f8e310a8       ingress-nginx-controller-9cc49f96f-kbg5k
	ea7798b467d7a       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5            3 minutes ago            Running             gadget                    0                   30059c94208e1       gadget-pvpm8
	b5d96232ae0dc       8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65                                                             3 minutes ago            Exited              patch                     2                   07dd6b0c4369f       ingress-nginx-admission-patch-clplv
	54fbb92131c5e       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24   3 minutes ago            Exited              create                    0                   ee5819e33b2ff       ingress-nginx-admission-create-qblf7
	908ff74ad2673       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               3 minutes ago            Running             minikube-ingress-dns      0                   1b856f06ecfb1       kube-ingress-dns-minikube
	956df6f89893f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             3 minutes ago            Running             storage-provisioner       0                   09c72d1443c5e       storage-provisioner
	0a91443f6a271       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             3 minutes ago            Running             coredns                   0                   76705026056d8       coredns-66bc5c9577-bz57x
	b1176e4015a7e       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                             4 minutes ago            Running             kindnet-cni               0                   49e6ee10ac30e       kindnet-tz5gq
	e107d7b8e3bb9       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                                             4 minutes ago            Running             kube-proxy                0                   9e5781e721339       kube-proxy-82n6s
	3e51bcece5e12       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                                             4 minutes ago            Running             kube-controller-manager   0                   c136cb87890ee       kube-controller-manager-addons-300979
	fff3278b83efc       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                                             4 minutes ago            Running             kube-scheduler            0                   54d55eedb02e1       kube-scheduler-addons-300979
	c40a699a2010b       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                                             4 minutes ago            Running             kube-apiserver            0                   32e6c5045e075       kube-apiserver-addons-300979
	53da588ed4741       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                             4 minutes ago            Running             etcd                      0                   d564f7a254b53       etcd-addons-300979
	
	
	==> coredns [0a91443f6a271948c749acf3fed622225bdcb19dffe078c7931d2bb27f8e6083] <==
	[INFO] 10.244.0.14:54193 - 4370 "AAAA IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,aa,rd,ra 198 0.000131599s
	[INFO] 10.244.0.14:54873 - 47570 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000060991s
	[INFO] 10.244.0.14:54873 - 47801 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000098299s
	[INFO] 10.244.0.14:51280 - 30893 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000058094s
	[INFO] 10.244.0.14:51280 - 30677 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000050508s
	[INFO] 10.244.0.14:49892 - 11527 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000111718s
	[INFO] 10.244.0.14:49892 - 11991 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000137152s
	[INFO] 10.244.0.22:59645 - 3635 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000205189s
	[INFO] 10.244.0.22:56775 - 48027 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000262111s
	[INFO] 10.244.0.22:41430 - 43204 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000127582s
	[INFO] 10.244.0.22:36533 - 1819 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000135783s
	[INFO] 10.244.0.22:57982 - 52371 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000128784s
	[INFO] 10.244.0.22:39712 - 52090 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000090317s
	[INFO] 10.244.0.22:44994 - 21436 "A IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.005009726s
	[INFO] 10.244.0.22:38947 - 14095 "AAAA IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.006600841s
	[INFO] 10.244.0.22:52600 - 31451 "A IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.004722133s
	[INFO] 10.244.0.22:48087 - 47262 "AAAA IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.00632671s
	[INFO] 10.244.0.22:44728 - 5034 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004318507s
	[INFO] 10.244.0.22:35114 - 38247 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005965456s
	[INFO] 10.244.0.22:44511 - 59145 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004823733s
	[INFO] 10.244.0.22:54526 - 2045 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.0056432s
	[INFO] 10.244.0.22:54912 - 55145 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.001172561s
	[INFO] 10.244.0.22:51758 - 51163 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00265625s
	[INFO] 10.244.0.25:36839 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000181753s
	[INFO] 10.244.0.25:56997 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000197958s
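
The NXDOMAIN walk above (svc.cluster.local, cluster.local, then the GCE internal domains before `storage.googleapis.com` finally resolves) is the default `ndots:5` search-list expansion, not an error. A minimal sketch of lowering it for an external-heavy workload via pod dnsConfig (pod name and value are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: external-client            # illustrative
spec:
  containers:
  - name: app
    image: busybox:1.36
    command: ["sleep", "3600"]
  dnsConfig:
    options:
    - name: ndots
      value: "2"                   # names with two or more dots are tried as-is first
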
	
	
	==> describe nodes <==
	Name:               addons-300979
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-300979
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c703192fb7638284bed1945941837d6f5d9e8170
	                    minikube.k8s.io/name=addons-300979
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T10_20_23_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-300979
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 10:20:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-300979
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 10:24:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 10:23:26 +0000   Mon, 29 Sep 2025 10:20:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 10:23:26 +0000   Mon, 29 Sep 2025 10:20:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 10:23:26 +0000   Mon, 29 Sep 2025 10:20:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 10:23:26 +0000   Mon, 29 Sep 2025 10:21:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-300979
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 cb6ab3a9a1534aa59cf8deb4792c88a9
	  System UUID:                60423002-dd81-4e20-b932-cb7bd40b7642
	  Boot ID:                    7892f883-017b-40ec-b18f-d6c900a242a7
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m45s
	  default                     hello-world-app-5d498dc89-2pnc2             0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  gadget                      gadget-pvpm8                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m24s
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-kbg5k    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         4m24s
	  kube-system                 coredns-66bc5c9577-bz57x                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m25s
	  kube-system                 etcd-addons-300979                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m31s
	  kube-system                 kindnet-tz5gq                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m25s
	  kube-system                 kube-apiserver-addons-300979                250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 kube-controller-manager-addons-300979       200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m24s
	  kube-system                 kube-proxy-82n6s                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m25s
	  kube-system                 kube-scheduler-addons-300979                100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m32s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             310Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m22s  kube-proxy       
	  Normal  Starting                 4m31s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m31s  kubelet          Node addons-300979 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m31s  kubelet          Node addons-300979 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m31s  kubelet          Node addons-300979 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m26s  node-controller  Node addons-300979 event: Registered Node addons-300979 in Controller
	  Normal  NodeReady                3m42s  kubelet          Node addons-300979 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.086355] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024748] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.061887] kauditd_printk_skb: 47 callbacks suppressed
	[Sep29 10:22] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: 56 eb b1 06 0b 88 56 02 0b 2a d3 31 08 00
	[  +1.020394] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000028] ll header: 00000000: 56 eb b1 06 0b 88 56 02 0b 2a d3 31 08 00
	[  +1.023880] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 eb b1 06 0b 88 56 02 0b 2a d3 31 08 00
	[  +1.023888] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: 56 eb b1 06 0b 88 56 02 0b 2a d3 31 08 00
	[  +1.024917] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 56 eb b1 06 0b 88 56 02 0b 2a d3 31 08 00
	[  +1.022942] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 eb b1 06 0b 88 56 02 0b 2a d3 31 08 00
	[  +2.047856] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 eb b1 06 0b 88 56 02 0b 2a d3 31 08 00
	[  +4.031633] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 56 eb b1 06 0b 88 56 02 0b 2a d3 31 08 00
	[  +8.448356] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 eb b1 06 0b 88 56 02 0b 2a d3 31 08 00
	[Sep29 10:23] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 eb b1 06 0b 88 56 02 0b 2a d3 31 08 00
	[ +32.254439] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 eb b1 06 0b 88 56 02 0b 2a d3 31 08 00
	
	
	==> etcd [53da588ed4741ff55c753194d8d1df3bbd6e85e3b668931746ce1d48e37ff5bc] <==
	{"level":"warn","ts":"2025-09-29T10:20:19.932390Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:20:19.938163Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:20:19.944686Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:20:19.950688Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:20:19.957275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:20:19.964314Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:20:19.970943Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:20:19.976624Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:20:19.982621Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:20:19.994239Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:20:20.001085Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:20:20.006962Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:20:20.012809Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:20:20.018763Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:20:20.025475Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:20:20.032096Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:20:20.046612Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:20:20.052611Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:20:20.059057Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:20:20.104956Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:20:30.852588Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:20:30.861406Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:20:57.498427Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:20:57.505326Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:20:57.525353Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54488","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:24:53 up 7 min,  0 users,  load average: 0.26, 0.62, 0.33
	Linux addons-300979 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [b1176e4015a7e2ab2be7671edf258a250d6ed26d8569cbb807122803449d8f67] <==
	I0929 10:22:51.022413       1 main.go:301] handling current node
	I0929 10:23:01.022929       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 10:23:01.022960       1 main.go:301] handling current node
	I0929 10:23:11.023002       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 10:23:11.023036       1 main.go:301] handling current node
	I0929 10:23:21.031146       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 10:23:21.031182       1 main.go:301] handling current node
	I0929 10:23:31.026045       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 10:23:31.026079       1 main.go:301] handling current node
	I0929 10:23:41.025752       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 10:23:41.025796       1 main.go:301] handling current node
	I0929 10:23:51.024955       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 10:23:51.024994       1 main.go:301] handling current node
	I0929 10:24:01.026765       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 10:24:01.026811       1 main.go:301] handling current node
	I0929 10:24:11.026965       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 10:24:11.027003       1 main.go:301] handling current node
	I0929 10:24:21.030969       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 10:24:21.030999       1 main.go:301] handling current node
	I0929 10:24:31.023572       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 10:24:31.023601       1 main.go:301] handling current node
	I0929 10:24:41.025856       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 10:24:41.025912       1 main.go:301] handling current node
	I0929 10:24:51.031026       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 10:24:51.031056       1 main.go:301] handling current node
	
	
	==> kube-apiserver [c40a699a2010b7b9d1dcd36de2c5c6e1040d1ef1e6e6d9ecbe4f2dd3e6231e33] <==
	E0929 10:22:16.519845       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:40538: use of closed network connection
	I0929 10:22:30.693772       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I0929 10:22:30.853968       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.96.43.103"}
	I0929 10:22:36.848072       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.106.94.253"}
	I0929 10:22:43.358364       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:22:44.132382       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:22:50.193898       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0929 10:23:06.975891       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0929 10:23:06.975943       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0929 10:23:06.989618       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0929 10:23:06.989668       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0929 10:23:06.990474       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0929 10:23:06.990512       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0929 10:23:07.002111       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0929 10:23:07.002163       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0929 10:23:07.016828       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0929 10:23:07.016938       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0929 10:23:07.991467       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0929 10:23:08.017129       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0929 10:23:08.024336       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E0929 10:23:20.848038       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0929 10:23:27.860974       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0929 10:23:55.398463       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:24:12.201374       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:24:52.199967       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.107.50.56"}
	
	
	==> kube-controller-manager [3e51bcece5e120b839d07441e5928504b7d487122b27d789ef5518101aae5061] <==
	E0929 10:23:16.087694       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:23:17.312599       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:23:17.313486       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:23:25.314224       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:23:25.315123       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:23:25.662029       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:23:25.662784       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	I0929 10:23:27.622264       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I0929 10:23:27.622303       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 10:23:27.636437       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I0929 10:23:27.636490       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0929 10:23:28.106738       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:23:28.107577       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:23:38.819083       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:23:38.820054       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:23:46.815350       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:23:46.816243       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:23:50.461694       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:23:50.462647       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:24:07.971254       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:24:07.972103       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:24:16.939961       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:24:16.940849       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0929 10:24:40.552389       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0929 10:24:40.554781       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [e107d7b8e3bb995d056f52531b6c74acb36ce2591b116d2ba04156fc24bedebd] <==
	I0929 10:20:30.658417       1 server_linux.go:53] "Using iptables proxy"
	I0929 10:20:30.718407       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 10:20:30.818754       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 10:20:30.818795       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0929 10:20:30.818942       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 10:20:30.840616       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 10:20:30.840691       1 server_linux.go:132] "Using iptables Proxier"
	I0929 10:20:30.845672       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 10:20:30.850197       1 server.go:527] "Version info" version="v1.34.0"
	I0929 10:20:30.850229       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 10:20:30.851654       1 config.go:200] "Starting service config controller"
	I0929 10:20:30.851680       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 10:20:30.851709       1 config.go:106] "Starting endpoint slice config controller"
	I0929 10:20:30.851716       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 10:20:30.851736       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 10:20:30.851745       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 10:20:30.851759       1 config.go:309] "Starting node config controller"
	I0929 10:20:30.851764       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 10:20:30.951864       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 10:20:30.951899       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0929 10:20:30.951910       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 10:20:30.951925       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [fff3278b83efc142936e130ee0dc75c3a025e5df9c12175c0d27872508f9d219] <==
	E0929 10:20:20.494287       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0929 10:20:20.494305       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0929 10:20:20.494361       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0929 10:20:20.494450       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 10:20:20.494395       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0929 10:20:20.494401       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0929 10:20:20.494402       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0929 10:20:20.494400       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 10:20:20.494452       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 10:20:20.494367       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0929 10:20:20.494496       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0929 10:20:20.494534       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 10:20:20.494543       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0929 10:20:20.494543       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0929 10:20:21.350592       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0929 10:20:21.353557       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0929 10:20:21.358363       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 10:20:21.390709       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 10:20:21.460985       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0929 10:20:21.490338       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0929 10:20:21.499511       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 10:20:21.534979       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0929 10:20:21.599357       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0929 10:20:21.726620       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	I0929 10:20:23.492286       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 29 10:23:35 addons-300979 kubelet[1571]: I0929 10:23:35.846981    1571 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fc7mm\" (UniqueName: \"kubernetes.io/projected/efe5f1d8-2555-4e7d-ae8c-7f1321191574-kube-api-access-fc7mm\") on node \"addons-300979\" DevicePath \"\""
	Sep 29 10:23:36 addons-300979 kubelet[1571]: I0929 10:23:36.286904    1571 scope.go:117] "RemoveContainer" containerID="2b005e228d43bd0d9328434f1898681fb9c97512b0c779a6c64ba8d3670fe135"
	Sep 29 10:23:36 addons-300979 kubelet[1571]: I0929 10:23:36.305466    1571 scope.go:117] "RemoveContainer" containerID="2b005e228d43bd0d9328434f1898681fb9c97512b0c779a6c64ba8d3670fe135"
	Sep 29 10:23:36 addons-300979 kubelet[1571]: E0929 10:23:36.305823    1571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b005e228d43bd0d9328434f1898681fb9c97512b0c779a6c64ba8d3670fe135\": container with ID starting with 2b005e228d43bd0d9328434f1898681fb9c97512b0c779a6c64ba8d3670fe135 not found: ID does not exist" containerID="2b005e228d43bd0d9328434f1898681fb9c97512b0c779a6c64ba8d3670fe135"
	Sep 29 10:23:36 addons-300979 kubelet[1571]: I0929 10:23:36.305863    1571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b005e228d43bd0d9328434f1898681fb9c97512b0c779a6c64ba8d3670fe135"} err="failed to get container status \"2b005e228d43bd0d9328434f1898681fb9c97512b0c779a6c64ba8d3670fe135\": rpc error: code = NotFound desc = could not find container \"2b005e228d43bd0d9328434f1898681fb9c97512b0c779a6c64ba8d3670fe135\": container with ID starting with 2b005e228d43bd0d9328434f1898681fb9c97512b0c779a6c64ba8d3670fe135 not found: ID does not exist"
	Sep 29 10:23:36 addons-300979 kubelet[1571]: I0929 10:23:36.645553    1571 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efe5f1d8-2555-4e7d-ae8c-7f1321191574" path="/var/lib/kubelet/pods/efe5f1d8-2555-4e7d-ae8c-7f1321191574/volumes"
	Sep 29 10:23:39 addons-300979 kubelet[1571]: I0929 10:23:39.643894    1571 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Sep 29 10:23:42 addons-300979 kubelet[1571]: E0929 10:23:42.691240    1571 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759141422691038157  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 29 10:23:42 addons-300979 kubelet[1571]: E0929 10:23:42.691269    1571 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759141422691038157  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 29 10:23:52 addons-300979 kubelet[1571]: E0929 10:23:52.693629    1571 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759141432693446201  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 29 10:23:52 addons-300979 kubelet[1571]: E0929 10:23:52.693669    1571 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759141432693446201  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 29 10:24:02 addons-300979 kubelet[1571]: E0929 10:24:02.696016    1571 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759141442695799018  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 29 10:24:02 addons-300979 kubelet[1571]: E0929 10:24:02.696055    1571 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759141442695799018  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 29 10:24:12 addons-300979 kubelet[1571]: E0929 10:24:12.698557    1571 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759141452698357540  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 29 10:24:12 addons-300979 kubelet[1571]: E0929 10:24:12.698587    1571 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759141452698357540  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 29 10:24:22 addons-300979 kubelet[1571]: E0929 10:24:22.700965    1571 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759141462700673232  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 29 10:24:22 addons-300979 kubelet[1571]: E0929 10:24:22.701000    1571 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759141462700673232  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 29 10:24:32 addons-300979 kubelet[1571]: E0929 10:24:32.702929    1571 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759141472702684977  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 29 10:24:32 addons-300979 kubelet[1571]: E0929 10:24:32.702958    1571 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759141472702684977  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 29 10:24:42 addons-300979 kubelet[1571]: E0929 10:24:42.705068    1571 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759141482704851835  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 29 10:24:42 addons-300979 kubelet[1571]: E0929 10:24:42.705096    1571 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759141482704851835  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 29 10:24:52 addons-300979 kubelet[1571]: I0929 10:24:52.231427    1571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlk7v\" (UniqueName: \"kubernetes.io/projected/2018dafe-c788-4301-95cb-8fb525be98ce-kube-api-access-xlk7v\") pod \"hello-world-app-5d498dc89-2pnc2\" (UID: \"2018dafe-c788-4301-95cb-8fb525be98ce\") " pod="default/hello-world-app-5d498dc89-2pnc2"
	Sep 29 10:24:52 addons-300979 kubelet[1571]: E0929 10:24:52.707913    1571 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759141492707689247  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 29 10:24:52 addons-300979 kubelet[1571]: E0929 10:24:52.707945    1571 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759141492707689247  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 29 10:24:53 addons-300979 kubelet[1571]: I0929 10:24:53.463936    1571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-5d498dc89-2pnc2" podStartSLOduration=0.925547004 podStartE2EDuration="1.463915085s" podCreationTimestamp="2025-09-29 10:24:52 +0000 UTC" firstStartedPulling="2025-09-29 10:24:52.474322302 +0000 UTC m=+269.905679712" lastFinishedPulling="2025-09-29 10:24:53.012690396 +0000 UTC m=+270.444047793" observedRunningTime="2025-09-29 10:24:53.463319236 +0000 UTC m=+270.894676655" watchObservedRunningTime="2025-09-29 10:24:53.463915085 +0000 UTC m=+270.895272502"
	
	
	==> storage-provisioner [956df6f89893f6ffed7c7446ef71960400b0d1812a0c104d0ad8980dcad0db15] <==
	W0929 10:24:28.907046       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:24:30.909335       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:24:30.913770       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:24:32.916940       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:24:32.920870       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:24:34.923797       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:24:34.928657       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:24:36.932192       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:24:36.935955       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:24:38.938920       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:24:38.943907       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:24:40.946772       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:24:40.950336       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:24:42.953196       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:24:42.957729       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:24:44.960989       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:24:44.964532       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:24:46.968204       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:24:46.973338       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:24:48.976892       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:24:48.980704       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:24:50.983594       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:24:50.988202       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:24:52.992288       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:24:52.996730       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
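The repeated "IPv4: martian source 10.244.0.21 from 127.0.0.1" entries in the kernel log excerpt above line up with the route_localnet=1 setting that kube-proxy reports enabling: once 127.0.0.0/8 traffic may be routed off the loopback device, packets sourced from 127.0.0.1 and forwarded onto eth0 get flagged as martians whenever martian logging is on. A minimal way to inspect the relevant sysctls on the node (these are standard kernel knob names; the actual values on this image are not verified here):

	minikube -p addons-300979 ssh -- sysctl net.ipv4.conf.all.route_localnet net.ipv4.conf.all.rp_filter net.ipv4.conf.all.log_martians

The timestamps (10:22 and 10:23) fall inside the test window, which is why these otherwise-benign log lines surface in the post-mortem.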
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-300979 -n addons-300979
helpers_test.go:269: (dbg) Run:  kubectl --context addons-300979 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-qblf7 ingress-nginx-admission-patch-clplv
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-300979 describe pod ingress-nginx-admission-create-qblf7 ingress-nginx-admission-patch-clplv
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-300979 describe pod ingress-nginx-admission-create-qblf7 ingress-nginx-admission-patch-clplv: exit status 1 (72.286395ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-qblf7" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-clplv" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-300979 describe pod ingress-nginx-admission-create-qblf7 ingress-nginx-admission-patch-clplv: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-300979 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-300979 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-300979 addons disable ingress --alsologtostderr -v=1: (7.682783954s)
--- FAIL: TestAddons/parallel/Ingress (152.73s)

TestFunctional/parallel/ServiceCmdConnect (602.95s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-992924 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-992924 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-zjm8s" [93c1f421-d656-47cb-a0b3-da32b9797d40] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-992924 -n functional-992924
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-09-29 10:38:10.437891444 +0000 UTC m=+1119.852704203
functional_test.go:1645: (dbg) Run:  kubectl --context functional-992924 describe po hello-node-connect-7d85dfc575-zjm8s -n default
functional_test.go:1645: (dbg) kubectl --context functional-992924 describe po hello-node-connect-7d85dfc575-zjm8s -n default:
Name:             hello-node-connect-7d85dfc575-zjm8s
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-992924/192.168.49.2
Start Time:       Mon, 29 Sep 2025 10:28:10 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
  IP:           10.244.0.5
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ql95t (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-ql95t:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-zjm8s to functional-992924
  Normal   Pulling    7m2s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m2s (x5 over 9m59s)    kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
  Warning  Failed     7m2s (x5 over 9m59s)    kubelet            Error: ErrImagePull
  Normal   BackOff    4m48s (x21 over 9m58s)  kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     4m48s (x21 over 9m58s)  kubelet            Error: ImagePullBackOff
functional_test.go:1645: (dbg) Run:  kubectl --context functional-992924 logs hello-node-connect-7d85dfc575-zjm8s -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-992924 logs hello-node-connect-7d85dfc575-zjm8s -n default: exit status 1 (67.09231ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-zjm8s" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1645: kubectl --context functional-992924 logs hello-node-connect-7d85dfc575-zjm8s -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-992924 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-zjm8s
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-992924/192.168.49.2
Start Time:       Mon, 29 Sep 2025 10:28:10 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
  IP:           10.244.0.5
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ql95t (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-ql95t:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-zjm8s to functional-992924
  Normal   Pulling    7m2s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m2s (x5 over 9m59s)    kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
  Warning  Failed     7m2s (x5 over 9m59s)    kubelet            Error: ErrImagePull
  Normal   BackOff    4m48s (x21 over 9m58s)  kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     4m48s (x21 over 9m58s)  kubelet            Error: ImagePullBackOff
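The pull failure above is CRI-O's short-name policy at work: "kicbase/echo-server" carries no registry component, and the node's /etc/containers/registries.conf defines neither an alias for it nor any unqualified-search registries, so the pull is rejected before any network access. Two common remedies, sketched under the assumption that docker.io/kicbase/echo-server is the intended image (the test output does not name a registry):

	# point the existing deployment at a fully qualified image reference
	kubectl --context functional-992924 set image deployment/hello-node-connect \
	  echo-server=docker.io/kicbase/echo-server:latest

	# or permit short-name resolution against docker.io on the node
	# (CRI-O may need a config reload before new pulls honor this)
	minikube -p functional-992924 ssh -- \
	  "echo 'unqualified-search-registries = [\"docker.io\"]' | sudo tee -a /etc/containers/registries.conf"

Fully qualifying the image is the less invasive fix, since it changes only the workload rather than node-wide pull policy.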

functional_test.go:1618: (dbg) Run:  kubectl --context functional-992924 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-992924 logs -l app=hello-node-connect: exit status 1 (58.658073ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-zjm8s" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1620: "kubectl --context functional-992924 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-992924 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.107.184.152
IPs:                      10.107.184.152
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30267/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
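The empty Endpoints field is the direct consequence of the ImagePullBackOff above: the pod never became Ready, so the service had no backend and the NodePort had nothing to forward to. The same check through the non-deprecated API (the v1 Endpoints deprecation is what the storage-provisioner warnings earlier in this report keep repeating):

	kubectl --context functional-992924 get endpointslices -n default \
	  -l kubernetes.io/service-name=hello-node-connect

An EndpointSlice with no ready endpoints here would confirm the connection test failed for want of a running backend, not because of the service wiring.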
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-992924
helpers_test.go:243: (dbg) docker inspect functional-992924:

-- stdout --
	[
	    {
	        "Id": "b5f4fb26530460b6e17513cadb0ce50c820013a119f62d685f1079803c2c855e",
	        "Created": "2025-09-29T10:26:03.527357601Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 33442,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T10:26:03.560151566Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/b5f4fb26530460b6e17513cadb0ce50c820013a119f62d685f1079803c2c855e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b5f4fb26530460b6e17513cadb0ce50c820013a119f62d685f1079803c2c855e/hostname",
	        "HostsPath": "/var/lib/docker/containers/b5f4fb26530460b6e17513cadb0ce50c820013a119f62d685f1079803c2c855e/hosts",
	        "LogPath": "/var/lib/docker/containers/b5f4fb26530460b6e17513cadb0ce50c820013a119f62d685f1079803c2c855e/b5f4fb26530460b6e17513cadb0ce50c820013a119f62d685f1079803c2c855e-json.log",
	        "Name": "/functional-992924",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-992924:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-992924",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b5f4fb26530460b6e17513cadb0ce50c820013a119f62d685f1079803c2c855e",
	                "LowerDir": "/var/lib/docker/overlay2/865b97266bdabcf27aae00d63701e872ad998f6664716015685bd39836534e92-init/diff:/var/lib/docker/overlay2/c7fa3299f755c710ae989985ad7ce5a1ce038c1f2be50e7356b276800d2744f7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/865b97266bdabcf27aae00d63701e872ad998f6664716015685bd39836534e92/merged",
	                "UpperDir": "/var/lib/docker/overlay2/865b97266bdabcf27aae00d63701e872ad998f6664716015685bd39836534e92/diff",
	                "WorkDir": "/var/lib/docker/overlay2/865b97266bdabcf27aae00d63701e872ad998f6664716015685bd39836534e92/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-992924",
	                "Source": "/var/lib/docker/volumes/functional-992924/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-992924",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-992924",
	                "name.minikube.sigs.k8s.io": "functional-992924",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7e5bb2251d509780f9578c60ce059db30beff0047a9862da4d1a93c7007e9634",
	            "SandboxKey": "/var/run/docker/netns/7e5bb2251d50",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-992924": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ee:20:2b:e0:3f:90",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7aa9214d1334933ae558255b7e2e350ef91f489658ab01f01bb59e09f8faea24",
	                    "EndpointID": "193825c1c0cf56e6bfe40e7a3fe457ec95ecf9f2cb490ef3b4bd0dae6fbbe278",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-992924",
	                        "b5f4fb265304"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
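The inspect output above shows each exposed container port published to an ephemeral host port on 127.0.0.1 (e.g. 8441/tcp -> 32781). The same mapping can be read back directly with a Go template; a minimal sketch against the container named above, not part of the test run:

    docker inspect -f '{{json .NetworkSettings.Ports}}' functional-992924
    docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-992924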
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-992924 -n functional-992924
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-992924 logs -n 25: (1.42788654s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                            ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-992924 ssh sudo cat /etc/ssl/certs/7117.pem                                                                     │ functional-992924 │ jenkins │ v1.37.0 │ 29 Sep 25 10:28 UTC │ 29 Sep 25 10:28 UTC │
	│ ssh            │ functional-992924 ssh sudo cat /usr/share/ca-certificates/7117.pem                                                         │ functional-992924 │ jenkins │ v1.37.0 │ 29 Sep 25 10:28 UTC │ 29 Sep 25 10:28 UTC │
	│ ssh            │ functional-992924 ssh sudo cat /etc/ssl/certs/51391683.0                                                                   │ functional-992924 │ jenkins │ v1.37.0 │ 29 Sep 25 10:28 UTC │ 29 Sep 25 10:28 UTC │
	│ ssh            │ functional-992924 ssh sudo cat /etc/ssl/certs/71172.pem                                                                    │ functional-992924 │ jenkins │ v1.37.0 │ 29 Sep 25 10:28 UTC │ 29 Sep 25 10:28 UTC │
	│ ssh            │ functional-992924 ssh sudo cat /usr/share/ca-certificates/71172.pem                                                        │ functional-992924 │ jenkins │ v1.37.0 │ 29 Sep 25 10:28 UTC │ 29 Sep 25 10:28 UTC │
	│ ssh            │ functional-992924 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                   │ functional-992924 │ jenkins │ v1.37.0 │ 29 Sep 25 10:28 UTC │ 29 Sep 25 10:28 UTC │
	│ start          │ -p functional-992924 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                  │ functional-992924 │ jenkins │ v1.37.0 │ 29 Sep 25 10:28 UTC │                     │
	│ start          │ -p functional-992924 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                            │ functional-992924 │ jenkins │ v1.37.0 │ 29 Sep 25 10:28 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-992924 --alsologtostderr -v=1                                                             │ functional-992924 │ jenkins │ v1.37.0 │ 29 Sep 25 10:28 UTC │ 29 Sep 25 10:28 UTC │
	│ cp             │ functional-992924 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                         │ functional-992924 │ jenkins │ v1.37.0 │ 29 Sep 25 10:28 UTC │ 29 Sep 25 10:28 UTC │
	│ ssh            │ functional-992924 ssh -n functional-992924 sudo cat /home/docker/cp-test.txt                                               │ functional-992924 │ jenkins │ v1.37.0 │ 29 Sep 25 10:28 UTC │ 29 Sep 25 10:28 UTC │
	│ cp             │ functional-992924 cp functional-992924:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1837202559/001/cp-test.txt │ functional-992924 │ jenkins │ v1.37.0 │ 29 Sep 25 10:28 UTC │ 29 Sep 25 10:28 UTC │
	│ ssh            │ functional-992924 ssh -n functional-992924 sudo cat /home/docker/cp-test.txt                                               │ functional-992924 │ jenkins │ v1.37.0 │ 29 Sep 25 10:28 UTC │ 29 Sep 25 10:28 UTC │
	│ cp             │ functional-992924 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                  │ functional-992924 │ jenkins │ v1.37.0 │ 29 Sep 25 10:28 UTC │ 29 Sep 25 10:28 UTC │
	│ ssh            │ functional-992924 ssh -n functional-992924 sudo cat /tmp/does/not/exist/cp-test.txt                                        │ functional-992924 │ jenkins │ v1.37.0 │ 29 Sep 25 10:28 UTC │ 29 Sep 25 10:28 UTC │
	│ image          │ functional-992924 image ls --format short --alsologtostderr                                                                │ functional-992924 │ jenkins │ v1.37.0 │ 29 Sep 25 10:28 UTC │ 29 Sep 25 10:28 UTC │
	│ image          │ functional-992924 image ls --format yaml --alsologtostderr                                                                 │ functional-992924 │ jenkins │ v1.37.0 │ 29 Sep 25 10:28 UTC │ 29 Sep 25 10:28 UTC │
	│ ssh            │ functional-992924 ssh pgrep buildkitd                                                                                      │ functional-992924 │ jenkins │ v1.37.0 │ 29 Sep 25 10:28 UTC │                     │
	│ image          │ functional-992924 image build -t localhost/my-image:functional-992924 testdata/build --alsologtostderr                     │ functional-992924 │ jenkins │ v1.37.0 │ 29 Sep 25 10:28 UTC │ 29 Sep 25 10:28 UTC │
	│ image          │ functional-992924 image ls                                                                                                 │ functional-992924 │ jenkins │ v1.37.0 │ 29 Sep 25 10:28 UTC │ 29 Sep 25 10:28 UTC │
	│ image          │ functional-992924 image ls --format json --alsologtostderr                                                                 │ functional-992924 │ jenkins │ v1.37.0 │ 29 Sep 25 10:28 UTC │ 29 Sep 25 10:28 UTC │
	│ image          │ functional-992924 image ls --format table --alsologtostderr                                                                │ functional-992924 │ jenkins │ v1.37.0 │ 29 Sep 25 10:28 UTC │ 29 Sep 25 10:28 UTC │
	│ update-context │ functional-992924 update-context --alsologtostderr -v=2                                                                    │ functional-992924 │ jenkins │ v1.37.0 │ 29 Sep 25 10:28 UTC │ 29 Sep 25 10:28 UTC │
	│ update-context │ functional-992924 update-context --alsologtostderr -v=2                                                                    │ functional-992924 │ jenkins │ v1.37.0 │ 29 Sep 25 10:28 UTC │ 29 Sep 25 10:28 UTC │
	│ update-context │ functional-992924 update-context --alsologtostderr -v=2                                                                    │ functional-992924 │ jenkins │ v1.37.0 │ 29 Sep 25 10:28 UTC │ 29 Sep 25 10:28 UTC │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 10:28:30
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 10:28:30.620659   48755 out.go:360] Setting OutFile to fd 1 ...
	I0929 10:28:30.620770   48755 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:28:30.620778   48755 out.go:374] Setting ErrFile to fd 2...
	I0929 10:28:30.620783   48755 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:28:30.620998   48755 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21657-3615/.minikube/bin
	I0929 10:28:30.621419   48755 out.go:368] Setting JSON to false
	I0929 10:28:30.622300   48755 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":655,"bootTime":1759141056,"procs":233,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 10:28:30.622384   48755 start.go:140] virtualization: kvm guest
	I0929 10:28:30.624226   48755 out.go:179] * [functional-992924] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 10:28:30.625602   48755 out.go:179]   - MINIKUBE_LOCATION=21657
	I0929 10:28:30.625620   48755 notify.go:220] Checking for updates...
	I0929 10:28:30.629330   48755 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 10:28:30.630847   48755 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21657-3615/kubeconfig
	I0929 10:28:30.631969   48755 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21657-3615/.minikube
	I0929 10:28:30.633025   48755 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 10:28:30.634052   48755 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 10:28:30.635414   48755 config.go:182] Loaded profile config "functional-992924": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 10:28:30.635862   48755 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 10:28:30.659033   48755 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 10:28:30.659125   48755 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 10:28:30.712809   48755 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-09-29 10:28:30.702255615 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 10:28:30.712959   48755 docker.go:318] overlay module found
	I0929 10:28:30.714660   48755 out.go:179] * Using the docker driver based on existing profile
	I0929 10:28:30.715944   48755 start.go:304] selected driver: docker
	I0929 10:28:30.715958   48755 start.go:924] validating driver "docker" against &{Name:functional-992924 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-992924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 10:28:30.716057   48755 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 10:28:30.716149   48755 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 10:28:30.768633   48755 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-09-29 10:28:30.759657085 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 10:28:30.769465   48755 cni.go:84] Creating CNI manager for ""
	I0929 10:28:30.769534   48755 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 10:28:30.769597   48755 start.go:348] cluster config:
	{Name:functional-992924 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-992924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 10:28:30.771493   48755 out.go:179] * dry-run validation complete!
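	# The start sequence above is the --dry-run path: minikube loads the existing profile config,
	# re-validates the docker driver against it, and exits at "dry-run validation complete!" without
	# touching the cluster. The invocation, as recorded in the Audit table above:
	#   out/minikube-linux-amd64 start -p functional-992924 --dry-run --alsologtostderr -v=1 --driver=docker --container-runtime=crio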
	
	
	==> CRI-O <==
	Sep 29 10:28:42 functional-992924 crio[4219]: time="2025-09-29 10:28:42.304718199Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Sep 29 10:28:42 functional-992924 crio[4219]: time="2025-09-29 10:28:42.304759771Z" level=info msg="Removed pod sandbox: 04e3a0745f8ddb7fce5cea69ed5d1f3d0a32ccb7ae1e751fd2ca61b6752f28d1" id=81334a72-e635-4222-8f62-8d6c04731e6c name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 29 10:28:42 functional-992924 crio[4219]: time="2025-09-29 10:28:42.305288151Z" level=info msg="Stopping pod sandbox: eaa9a2ce12f49a2abf001fc4d811a772cf1fef7afc4e984f8d28157cb0ad5410" id=b9df41e5-e869-4577-be6a-286f239ce8a7 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 29 10:28:42 functional-992924 crio[4219]: time="2025-09-29 10:28:42.305323535Z" level=info msg="Stopped pod sandbox (already stopped): eaa9a2ce12f49a2abf001fc4d811a772cf1fef7afc4e984f8d28157cb0ad5410" id=b9df41e5-e869-4577-be6a-286f239ce8a7 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 29 10:28:42 functional-992924 crio[4219]: time="2025-09-29 10:28:42.305669194Z" level=info msg="Removing pod sandbox: eaa9a2ce12f49a2abf001fc4d811a772cf1fef7afc4e984f8d28157cb0ad5410" id=d35a6b10-1f9b-44a6-9510-710589d4dffc name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 29 10:28:42 functional-992924 crio[4219]: time="2025-09-29 10:28:42.322592078Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Sep 29 10:28:42 functional-992924 crio[4219]: time="2025-09-29 10:28:42.322630111Z" level=info msg="Removed pod sandbox: eaa9a2ce12f49a2abf001fc4d811a772cf1fef7afc4e984f8d28157cb0ad5410" id=d35a6b10-1f9b-44a6-9510-710589d4dffc name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 29 10:28:42 functional-992924 crio[4219]: time="2025-09-29 10:28:42.357297327Z" level=info msg="Pulled image: docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb" id=b5ccea85-ea5e-427a-90ef-8c1f80972963 name=/runtime.v1.ImageService/PullImage
	Sep 29 10:28:42 functional-992924 crio[4219]: time="2025-09-29 10:28:42.357928167Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=ae200d4f-e6a7-4119-839b-c2b5a221bd7a name=/runtime.v1.ImageService/ImageStatus
	Sep 29 10:28:42 functional-992924 crio[4219]: time="2025-09-29 10:28:42.359080825Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,RepoTags:[docker.io/library/mysql:5.7],RepoDigests:[docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da],Size_:519571821,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=ae200d4f-e6a7-4119-839b-c2b5a221bd7a name=/runtime.v1.ImageService/ImageStatus
	Sep 29 10:28:42 functional-992924 crio[4219]: time="2025-09-29 10:28:42.359800448Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=c7c588ae-6c4b-4a07-8da2-3005f0a9d600 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 10:28:42 functional-992924 crio[4219]: time="2025-09-29 10:28:42.361173674Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,RepoTags:[docker.io/library/mysql:5.7],RepoDigests:[docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da],Size_:519571821,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=c7c588ae-6c4b-4a07-8da2-3005f0a9d600 name=/runtime.v1.ImageService/ImageStatus
	Sep 29 10:28:42 functional-992924 crio[4219]: time="2025-09-29 10:28:42.364511148Z" level=info msg="Creating container: default/mysql-5bb876957f-w26jj/mysql" id=fc0ae3b0-589e-445d-9253-605aaafb5a89 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 29 10:28:42 functional-992924 crio[4219]: time="2025-09-29 10:28:42.364617304Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 29 10:28:42 functional-992924 crio[4219]: time="2025-09-29 10:28:42.435989835Z" level=info msg="Created container 27750ffa2e9809f45223d3a700581bc38cc91242f918c6a7f38cf9bf4fc5c523: default/mysql-5bb876957f-w26jj/mysql" id=fc0ae3b0-589e-445d-9253-605aaafb5a89 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 29 10:28:42 functional-992924 crio[4219]: time="2025-09-29 10:28:42.436626651Z" level=info msg="Starting container: 27750ffa2e9809f45223d3a700581bc38cc91242f918c6a7f38cf9bf4fc5c523" id=e8c400fa-b6f2-4dd5-b66d-45b5fbe85e8d name=/runtime.v1.RuntimeService/StartContainer
	Sep 29 10:28:42 functional-992924 crio[4219]: time="2025-09-29 10:28:42.443916812Z" level=info msg="Started container" PID=8507 containerID=27750ffa2e9809f45223d3a700581bc38cc91242f918c6a7f38cf9bf4fc5c523 description=default/mysql-5bb876957f-w26jj/mysql id=e8c400fa-b6f2-4dd5-b66d-45b5fbe85e8d name=/runtime.v1.RuntimeService/StartContainer sandboxID=99cdb19648b6d0b5086d69e9c104e5533427ccf1f4882281114a8f2037a298cc
	Sep 29 10:28:50 functional-992924 crio[4219]: time="2025-09-29 10:28:50.101317037Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=b7e5e129-efe5-4f95-98f2-bc8613aba79a name=/runtime.v1.ImageService/PullImage
	Sep 29 10:28:59 functional-992924 crio[4219]: time="2025-09-29 10:28:59.100103338Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=5f311f13-e5e4-461d-8589-74258030575b name=/runtime.v1.ImageService/PullImage
	Sep 29 10:29:38 functional-992924 crio[4219]: time="2025-09-29 10:29:38.101008730Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=b42c1fd4-5645-428a-9b15-aadd58ad2f63 name=/runtime.v1.ImageService/PullImage
	Sep 29 10:29:42 functional-992924 crio[4219]: time="2025-09-29 10:29:42.100998221Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=a80305f9-72c8-4b77-97e7-d2eeb9013ce7 name=/runtime.v1.ImageService/PullImage
	Sep 29 10:31:08 functional-992924 crio[4219]: time="2025-09-29 10:31:08.100555474Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=f4c33ef2-08bd-4ef9-9f5e-9d532b1eb05f name=/runtime.v1.ImageService/PullImage
	Sep 29 10:31:12 functional-992924 crio[4219]: time="2025-09-29 10:31:12.100369905Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=c2fbc4bb-3421-4daf-81c3-897570da63aa name=/runtime.v1.ImageService/PullImage
	Sep 29 10:34:00 functional-992924 crio[4219]: time="2025-09-29 10:34:00.100786445Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=91d38f87-f174-44fa-b050-fe1757db22b4 name=/runtime.v1.ImageService/PullImage
	Sep 29 10:34:03 functional-992924 crio[4219]: time="2025-09-29 10:34:03.100212498Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=8c5b338e-ef56-4f9c-b42c-a454cc2fb015 name=/runtime.v1.ImageService/PullImage
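	# The repeated "Pulling image: kicbase/echo-server:latest" entries above span several minutes with
	# no matching "Pulled image" line, suggesting the pull never completes. A diagnostic sketch for
	# probing it by hand (assumes the profile is still running; not part of the captured run):
	#   out/minikube-linux-amd64 -p functional-992924 ssh "sudo crictl pull kicbase/echo-server:latest"
	#   out/minikube-linux-amd64 -p functional-992924 ssh "sudo crictl images"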
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	27750ffa2e980       docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb                  9 minutes ago       Running             mysql                       0                   99cdb19648b6d       mysql-5bb876957f-w26jj
	5a6bf85e545f6       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   9 minutes ago       Running             dashboard-metrics-scraper   0                   f167eb6b6729e       dashboard-metrics-scraper-77bf4d6c4c-2z5gz
	d7c461c05e1d0       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93         9 minutes ago       Running             kubernetes-dashboard        0                   f8979347f2f6b       kubernetes-dashboard-855c9754f9-sp4lm
	c5e173e9cf0c3       docker.io/library/nginx@sha256:27637a97e3d1d0518adc2a877b60db3779970f19474b6e586ddcbc2d5500e285                  9 minutes ago       Running             myfrontend                  0                   1de9d1c16a260       sp-pod
	15d596b346f44       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e              9 minutes ago       Exited              mount-munger                0                   dc84dacd26c90       busybox-mount
	433ea64c420aa       docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8                  10 minutes ago      Running             nginx                       0                   5eb27ce152f79       nginx-svc
	b16b9fb3dd3af       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 10 minutes ago      Running             storage-provisioner         2                   426b997d76d72       storage-provisioner
	2b77284b91bc6       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                                 10 minutes ago      Running             kube-apiserver              0                   623c0bd6ab4de       kube-apiserver-functional-992924
	e33225047ea10       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                                 10 minutes ago      Running             kube-scheduler              1                   273082a933b73       kube-scheduler-functional-992924
	f69279e1ca2ff       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                                 10 minutes ago      Running             kube-controller-manager     1                   8a13827ad8808       kube-controller-manager-functional-992924
	661b0829e1636       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 10 minutes ago      Running             etcd                        1                   7b33f06210c37       etcd-functional-992924
	5b808a56a49c3       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 10 minutes ago      Running             coredns                     1                   4e04d706cd7dc       coredns-66bc5c9577-gfwvv
	828162eeceaef       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                                 10 minutes ago      Running             kube-proxy                  1                   9452bdc1822c1       kube-proxy-7tjnw
	6a8ddccc73092       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 10 minutes ago      Exited              storage-provisioner         1                   426b997d76d72       storage-provisioner
	344a389d7cfa6       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 10 minutes ago      Running             kindnet-cni                 1                   b44dfcacab19a       kindnet-5wt44
	31001f2893895       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 11 minutes ago      Exited              coredns                     0                   4e04d706cd7dc       coredns-66bc5c9577-gfwvv
	2e0a2bb7c59dc       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                                 11 minutes ago      Exited              kube-proxy                  0                   9452bdc1822c1       kube-proxy-7tjnw
	7bbda7f413fd1       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 11 minutes ago      Exited              kindnet-cni                 0                   b44dfcacab19a       kindnet-5wt44
	5f8b1f6c25249       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                                 11 minutes ago      Exited              kube-controller-manager     0                   8a13827ad8808       kube-controller-manager-functional-992924
	11265eccdf0be       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                                 11 minutes ago      Exited              kube-scheduler              0                   273082a933b73       kube-scheduler-functional-992924
	4e33ab222a4a7       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 11 minutes ago      Exited              etcd                        0                   7b33f06210c37       etcd-functional-992924
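	# The table above is the node's CRI view of all containers; a sketch for regenerating it by hand:
	#   out/minikube-linux-amd64 -p functional-992924 ssh "sudo crictl ps -a"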
	
	
	==> coredns [31001f28938952963fd30adc03103fbdd7474cb92ad4424385b4050bc375e23c] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42079 - 16625 "HINFO IN 3221423978783769580.2148686221803173403. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.016328222s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [5b808a56a49c36f60b3a3373bea603b963a7d53d5823f5c4e162aa34279f21ed] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:33516 - 26130 "HINFO IN 731344603281224822.813293471524691955. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.01488662s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
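	# The "connection refused" errors against https://10.96.0.1:443 above are consistent with the
	# apiserver restart visible in the etcd logs below; once the apiserver returns, the ready plugin
	# unblocks. A quick check of the in-cluster API endpoint (a sketch, not from the captured run):
	#   kubectl --context functional-992924 get endpoints kubernetes -n default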
	
	
	==> describe nodes <==
	Name:               functional-992924
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-992924
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c703192fb7638284bed1945941837d6f5d9e8170
	                    minikube.k8s.io/name=functional-992924
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T10_26_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 10:26:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-992924
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 10:38:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 10:35:42 +0000   Mon, 29 Sep 2025 10:26:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 10:35:42 +0000   Mon, 29 Sep 2025 10:26:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 10:35:42 +0000   Mon, 29 Sep 2025 10:26:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 10:35:42 +0000   Mon, 29 Sep 2025 10:27:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-992924
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 e37046b1abaa4bf5a8e096f17bea9d47
	  System UUID:                d6ec61f1-aec9-411c-b9e2-0b2bddb0d842
	  Boot ID:                    7892f883-017b-40ec-b18f-d6c900a242a7
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-9xvxk                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m57s
	  default                     hello-node-connect-7d85dfc575-zjm8s           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-w26jj                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     9m38s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m44s
	  kube-system                 coredns-66bc5c9577-gfwvv                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 etcd-functional-992924                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kindnet-5wt44                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-functional-992924              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-992924     200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-7tjnw                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-functional-992924              100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-2z5gz    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m40s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-sp4lm         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-992924 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-992924 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x8 over 11m)  kubelet          Node functional-992924 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    11m                kubelet          Node functional-992924 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  11m                kubelet          Node functional-992924 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     11m                kubelet          Node functional-992924 status is now: NodeHasSufficientPID
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           11m                node-controller  Node functional-992924 event: Registered Node functional-992924 in Controller
	  Normal  NodeReady                11m                kubelet          Node functional-992924 status is now: NodeReady
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-992924 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-992924 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-992924 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                node-controller  Node functional-992924 event: Registered Node functional-992924 in Controller
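	# The node view above is "kubectl describe node" output; a sketch for regenerating it:
	#   kubectl --context functional-992924 describe node functional-992924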
	
	
	==> dmesg <==
	[  +0.086355] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024748] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.061887] kauditd_printk_skb: 47 callbacks suppressed
	[Sep29 10:22] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: 56 eb b1 06 0b 88 56 02 0b 2a d3 31 08 00
	[  +1.020394] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000028] ll header: 00000000: 56 eb b1 06 0b 88 56 02 0b 2a d3 31 08 00
	[  +1.023880] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 eb b1 06 0b 88 56 02 0b 2a d3 31 08 00
	[  +1.023888] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: 56 eb b1 06 0b 88 56 02 0b 2a d3 31 08 00
	[  +1.024917] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 56 eb b1 06 0b 88 56 02 0b 2a d3 31 08 00
	[  +1.022942] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 eb b1 06 0b 88 56 02 0b 2a d3 31 08 00
	[  +2.047856] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 eb b1 06 0b 88 56 02 0b 2a d3 31 08 00
	[  +4.031633] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 56 eb b1 06 0b 88 56 02 0b 2a d3 31 08 00
	[  +8.448356] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 eb b1 06 0b 88 56 02 0b 2a d3 31 08 00
	[Sep29 10:23] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 eb b1 06 0b 88 56 02 0b 2a d3 31 08 00
	[ +32.254439] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 eb b1 06 0b 88 56 02 0b 2a d3 31 08 00
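	# "martian source 10.244.0.21 from 127.0.0.1" means the kernel saw packets arrive on eth0 with a
	# loopback source address, which can happen when tests curl ports published on 127.0.0.1. To pull
	# these entries with readable timestamps (a sketch):
	#   out/minikube-linux-amd64 -p functional-992924 ssh "sudo dmesg --ctime | grep -i martian"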
	
	
	==> etcd [4e33ab222a4a760f2cc274756ba146873fb46c1e4cc92daeb38482031a0fb26f] <==
	{"level":"warn","ts":"2025-09-29T10:26:15.103213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:26:15.109162Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:26:15.128078Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:26:15.131408Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:26:15.137137Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:26:15.142995Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:26:15.188252Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37030","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T10:27:39.682993Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-29T10:27:39.683090Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-992924","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-09-29T10:27:39.683169Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T10:27:39.684701Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T10:27:39.686071Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T10:27:39.686129Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"warn","ts":"2025-09-29T10:27:39.686157Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T10:27:39.686145Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T10:27:39.686202Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-09-29T10:27:39.686198Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-09-29T10:27:39.686204Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"error","ts":"2025-09-29T10:27:39.686217Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-09-29T10:27:39.686191Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-29T10:27:39.686242Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T10:27:39.688020Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-09-29T10:27:39.688070Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T10:27:39.688100Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-09-29T10:27:39.688125Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-992924","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [661b0829e1636277a84254456e2dafa5efaf0032fb565f3ecb77f11fc2454cfe] <==
	{"level":"warn","ts":"2025-09-29T10:27:42.383803Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:27:42.390417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:27:42.397588Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:27:42.404780Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:27:42.411842Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:27:42.418789Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:27:42.426480Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:27:42.432765Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:27:42.440388Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:27:42.447682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:27:42.454815Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:27:42.460728Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:27:42.468413Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:27:42.475370Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:27:42.490330Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:27:42.496895Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:27:42.503115Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T10:27:42.551483Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36502","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T10:28:43.454233Z","caller":"traceutil/trace.go:172","msg":"trace[592246934] linearizableReadLoop","detail":"{readStateIndex:925; appliedIndex:925; }","duration":"112.61108ms","start":"2025-09-29T10:28:43.341597Z","end":"2025-09-29T10:28:43.454208Z","steps":["trace[592246934] 'read index received'  (duration: 112.603055ms)","trace[592246934] 'applied index is now lower than readState.Index'  (duration: 6.94µs)"],"step_count":2}
	{"level":"info","ts":"2025-09-29T10:28:43.454347Z","caller":"traceutil/trace.go:172","msg":"trace[1224978234] transaction","detail":"{read_only:false; response_revision:859; number_of_response:1; }","duration":"164.256035ms","start":"2025-09-29T10:28:43.290077Z","end":"2025-09-29T10:28:43.454333Z","steps":["trace[1224978234] 'process raft request'  (duration: 164.15502ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T10:28:43.454376Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"112.755328ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/runtimeclasses\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T10:28:43.454451Z","caller":"traceutil/trace.go:172","msg":"trace[741949040] range","detail":"{range_begin:/registry/runtimeclasses; range_end:; response_count:0; response_revision:859; }","duration":"112.849672ms","start":"2025-09-29T10:28:43.341589Z","end":"2025-09-29T10:28:43.454438Z","steps":["trace[741949040] 'agreement among raft nodes before linearized reading'  (duration: 112.710297ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T10:37:42.055520Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1148}
	{"level":"info","ts":"2025-09-29T10:37:42.074136Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1148,"took":"18.237989ms","hash":926074192,"current-db-size-bytes":3305472,"current-db-size":"3.3 MB","current-db-size-in-use-bytes":1495040,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2025-09-29T10:37:42.074175Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":926074192,"revision":1148,"compact-revision":-1}
	
	
	==> kernel <==
	 10:38:12 up 20 min,  0 users,  load average: 0.03, 0.19, 0.30
	Linux functional-992924 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [344a389d7cfa6731409938a2efd36b73d9b9513522610b9177cd02ba63f8b013] <==
	I0929 10:36:09.824263       1 main.go:301] handling current node
	I0929 10:36:19.823140       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 10:36:19.823191       1 main.go:301] handling current node
	I0929 10:36:29.829969       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 10:36:29.829999       1 main.go:301] handling current node
	I0929 10:36:39.824939       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 10:36:39.825010       1 main.go:301] handling current node
	I0929 10:36:49.822730       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 10:36:49.822769       1 main.go:301] handling current node
	I0929 10:36:59.823139       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 10:36:59.823180       1 main.go:301] handling current node
	I0929 10:37:09.822558       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 10:37:09.822642       1 main.go:301] handling current node
	I0929 10:37:19.828740       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 10:37:19.828773       1 main.go:301] handling current node
	I0929 10:37:29.825981       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 10:37:29.826017       1 main.go:301] handling current node
	I0929 10:37:39.823230       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 10:37:39.823263       1 main.go:301] handling current node
	I0929 10:37:49.822308       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 10:37:49.822353       1 main.go:301] handling current node
	I0929 10:37:59.823323       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 10:37:59.823360       1 main.go:301] handling current node
	I0929 10:38:09.822357       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 10:38:09.822408       1 main.go:301] handling current node
	
	
	==> kindnet [7bbda7f413fd1224dbde28bd75760a2abf6f9e9086b9ce6a9fd7c8951c5bcb21] <==
	I0929 10:26:23.878006       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0929 10:26:23.878502       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0929 10:26:23.878699       1 main.go:148] setting mtu 1500 for CNI 
	I0929 10:26:23.878736       1 main.go:178] kindnetd IP family: "ipv4"
	I0929 10:26:23.878766       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-29T10:26:24Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0929 10:26:24.073870       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0929 10:26:24.073961       1 controller.go:381] "Waiting for informer caches to sync"
	I0929 10:26:24.073985       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0929 10:26:24.074114       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E0929 10:26:54.075279       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E0929 10:26:54.075326       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E0929 10:26:54.075352       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E0929 10:26:54.075457       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I0929 10:26:55.574653       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0929 10:26:55.574690       1 metrics.go:72] Registering metrics
	I0929 10:26:55.574773       1 controller.go:711] "Syncing nftables rules"
	I0929 10:27:04.078974       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 10:27:04.079014       1 main.go:301] handling current node
	I0929 10:27:14.081965       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 10:27:14.082006       1 main.go:301] handling current node
	I0929 10:27:24.077661       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 10:27:24.077698       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2b77284b91bc66ecf43fb1dd7cfef6795315bdf2dc0a8b96db111718a0ec0b5a] <==
	E0929 10:28:26.260826       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:52014: use of closed network connection
	I0929 10:28:31.621845       1 controller.go:667] quota admission added evaluator for: namespaces
	I0929 10:28:31.732863       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.40.148"}
	I0929 10:28:31.742053       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.219.209"}
	E0929 10:28:33.501539       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:34068: use of closed network connection
	I0929 10:28:33.630782       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.103.203.77"}
	E0929 10:28:49.777220       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:49032: use of closed network connection
	E0929 10:28:50.496571       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:49056: use of closed network connection
	I0929 10:28:51.540517       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:29:01.135277       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:29:53.580350       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:30:25.479821       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:30:57.485094       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:31:27.160601       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:32:00.349624       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:32:56.745527       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:33:16.319291       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:34:24.657339       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:34:33.031292       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:35:26.970689       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:35:39.755316       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:36:36.121496       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:37:01.528679       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 10:37:42.939459       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0929 10:37:55.866588       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [5f8b1f6c25249a6a2c103277962bcd571cd993f4798a7fbc5ca8a1d5524548fc] <==
	I0929 10:26:22.679452       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0929 10:26:22.679459       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0929 10:26:22.679458       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0929 10:26:22.679518       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I0929 10:26:22.679562       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0929 10:26:22.679519       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I0929 10:26:22.679638       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-992924"
	I0929 10:26:22.679685       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0929 10:26:22.679744       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0929 10:26:22.679744       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0929 10:26:22.679996       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0929 10:26:22.680128       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0929 10:26:22.680148       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0929 10:26:22.680573       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0929 10:26:22.680603       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0929 10:26:22.680718       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0929 10:26:22.682226       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0929 10:26:22.683389       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 10:26:22.683390       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0929 10:26:22.683396       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0929 10:26:22.683477       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0929 10:26:22.684643       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0929 10:26:22.690835       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0929 10:26:22.699287       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 10:27:07.686018       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [f69279e1ca2ff65b5523782d5f8dc62d42572461fc2121aba13789ae59ec2d93] <==
	I0929 10:27:46.335936       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0929 10:27:46.338181       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0929 10:27:46.340435       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0929 10:27:46.341538       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0929 10:27:46.345917       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0929 10:27:46.347074       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0929 10:27:46.347093       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0929 10:27:46.347131       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0929 10:27:46.347177       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0929 10:27:46.347205       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0929 10:27:46.347244       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I0929 10:27:46.347250       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0929 10:27:46.347455       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0929 10:27:46.347524       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0929 10:27:46.347488       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0929 10:27:46.347527       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0929 10:27:46.350401       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I0929 10:27:46.351627       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 10:27:46.368150       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0929 10:28:31.679413       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 10:28:31.683061       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 10:28:31.685537       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 10:28:31.689220       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 10:28:31.690556       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 10:28:31.693574       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
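
Annotation: the burst of "serviceaccount \"kubernetes-dashboard\" not found" errors looks like a creation-order race, not a failure: the dashboard ReplicaSets were applied before their ServiceAccount, and the ReplicaSet controller retries until the account exists (the dashboard logs later in this report show it starting normally at 10:28:35). A quick check that the account eventually landed, purely as a diagnostic and not part of the test flow:

    kubectl --context functional-992924 -n kubernetes-dashboard get serviceaccount kubernetes-dashboard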
	
	
	==> kube-proxy [2e0a2bb7c59dce43dca83774af44941c0f93307cc9f8d5045fdae20065288424] <==
	I0929 10:26:23.803602       1 server_linux.go:53] "Using iptables proxy"
	I0929 10:26:23.867508       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 10:26:23.968646       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 10:26:23.968689       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0929 10:26:23.968777       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 10:26:23.994829       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 10:26:23.994898       1 server_linux.go:132] "Using iptables Proxier"
	I0929 10:26:24.000842       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 10:26:24.001300       1 server.go:527] "Version info" version="v1.34.0"
	I0929 10:26:24.001382       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 10:26:24.004725       1 config.go:200] "Starting service config controller"
	I0929 10:26:24.004755       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 10:26:24.004834       1 config.go:106] "Starting endpoint slice config controller"
	I0929 10:26:24.004840       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 10:26:24.004862       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 10:26:24.004868       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 10:26:24.005039       1 config.go:309] "Starting node config controller"
	I0929 10:26:24.005083       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 10:26:24.104943       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 10:26:24.105142       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 10:26:24.105072       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0929 10:26:24.105044       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [828162eeceaefd3fe9c8eaff8a85c743049e9126ad8e1dc82003a52c73960b4a] <==
	I0929 10:27:29.509663       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E0929 10:27:29.510614       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-992924&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 10:27:31.017889       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-992924&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 10:27:33.107465       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-992924&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 10:27:39.201633       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-992924&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I0929 10:27:47.410099       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 10:27:47.410130       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0929 10:27:47.410199       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 10:27:47.428417       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 10:27:47.428478       1 server_linux.go:132] "Using iptables Proxier"
	I0929 10:27:47.434019       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 10:27:47.434812       1 server.go:527] "Version info" version="v1.34.0"
	I0929 10:27:47.434838       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 10:27:47.436742       1 config.go:200] "Starting service config controller"
	I0929 10:27:47.436759       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 10:27:47.436805       1 config.go:106] "Starting endpoint slice config controller"
	I0929 10:27:47.436822       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 10:27:47.436853       1 config.go:309] "Starting node config controller"
	I0929 10:27:47.436868       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 10:27:47.436868       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 10:27:47.436897       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 10:27:47.436890       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 10:27:47.537750       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 10:27:47.537783       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 10:27:47.537799       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [11265eccdf0be4683787f2fd852d6e84e5b46a428ccbb77751b40a937109b2ed] <==
	E0929 10:26:15.616069       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0929 10:26:15.616124       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 10:26:15.616163       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0929 10:26:15.616179       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 10:26:15.616254       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 10:26:15.616299       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0929 10:26:15.616302       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0929 10:26:15.616327       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 10:26:16.590563       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0929 10:26:16.627138       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 10:26:16.673192       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0929 10:26:16.700309       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0929 10:26:16.722970       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0929 10:26:16.769075       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0929 10:26:16.774248       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0929 10:26:16.794470       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 10:26:16.830772       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0929 10:26:16.835890       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	I0929 10:26:19.912693       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 10:27:39.547242       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0929 10:27:39.547232       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 10:27:39.547311       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0929 10:27:39.547326       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0929 10:27:39.547343       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0929 10:27:39.547362       1 run.go:72] "command failed" err="finished without leader elect"
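
Annotation: the "Failed to watch ... is forbidden" errors at the top of this block are typical of scheduler startup, before the apiserver finishes bootstrapping RBAC for system:kube-scheduler; they stop once the client-ca caches sync at 10:26:19, and the block ends with an ordinary graceful shutdown. If such errors persisted past startup, one hedged way to probe the scheduler's effective permissions would be an impersonated access check:

    kubectl --context functional-992924 auth can-i list pods --as=system:kube-scheduler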
	
	
	==> kube-scheduler [e33225047ea10504c299b2defded8eada5a7945f9ec90a828fcfbfa7669a60f2] <==
	I0929 10:27:42.219006       1 serving.go:386] Generated self-signed cert in-memory
	W0929 10:27:42.913379       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0929 10:27:42.913471       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0929 10:27:42.913487       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0929 10:27:42.913497       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0929 10:27:42.944673       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0929 10:27:42.944709       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 10:27:42.946772       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 10:27:42.946805       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 10:27:42.947207       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0929 10:27:42.947490       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0929 10:27:43.047918       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 29 10:37:05 functional-992924 kubelet[5173]: E0929 10:37:05.100738    5173 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-9xvxk" podUID="a59b4519-1efe-4d8f-871b-966656355480"
	Sep 29 10:37:09 functional-992924 kubelet[5173]: E0929 10:37:09.100183    5173 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-zjm8s" podUID="93c1f421-d656-47cb-a0b3-da32b9797d40"
	Sep 29 10:37:11 functional-992924 kubelet[5173]: E0929 10:37:11.205899    5173 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759142231205582208  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303433}  inodes_used:{value:134}}"
	Sep 29 10:37:11 functional-992924 kubelet[5173]: E0929 10:37:11.205935    5173 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759142231205582208  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303433}  inodes_used:{value:134}}"
	Sep 29 10:37:17 functional-992924 kubelet[5173]: E0929 10:37:17.100617    5173 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-9xvxk" podUID="a59b4519-1efe-4d8f-871b-966656355480"
	Sep 29 10:37:21 functional-992924 kubelet[5173]: E0929 10:37:21.207292    5173 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759142241207071507  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303433}  inodes_used:{value:134}}"
	Sep 29 10:37:21 functional-992924 kubelet[5173]: E0929 10:37:21.207328    5173 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759142241207071507  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303433}  inodes_used:{value:134}}"
	Sep 29 10:37:22 functional-992924 kubelet[5173]: E0929 10:37:22.099670    5173 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-zjm8s" podUID="93c1f421-d656-47cb-a0b3-da32b9797d40"
	Sep 29 10:37:29 functional-992924 kubelet[5173]: E0929 10:37:29.100604    5173 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-9xvxk" podUID="a59b4519-1efe-4d8f-871b-966656355480"
	Sep 29 10:37:31 functional-992924 kubelet[5173]: E0929 10:37:31.208783    5173 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759142251208533757  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303433}  inodes_used:{value:134}}"
	Sep 29 10:37:31 functional-992924 kubelet[5173]: E0929 10:37:31.208824    5173 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759142251208533757  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303433}  inodes_used:{value:134}}"
	Sep 29 10:37:36 functional-992924 kubelet[5173]: E0929 10:37:36.100607    5173 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-zjm8s" podUID="93c1f421-d656-47cb-a0b3-da32b9797d40"
	Sep 29 10:37:41 functional-992924 kubelet[5173]: E0929 10:37:41.211244    5173 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759142261210965239  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303433}  inodes_used:{value:134}}"
	Sep 29 10:37:41 functional-992924 kubelet[5173]: E0929 10:37:41.211283    5173 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759142261210965239  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303433}  inodes_used:{value:134}}"
	Sep 29 10:37:42 functional-992924 kubelet[5173]: E0929 10:37:42.100717    5173 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-9xvxk" podUID="a59b4519-1efe-4d8f-871b-966656355480"
	Sep 29 10:37:50 functional-992924 kubelet[5173]: E0929 10:37:50.100112    5173 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-zjm8s" podUID="93c1f421-d656-47cb-a0b3-da32b9797d40"
	Sep 29 10:37:51 functional-992924 kubelet[5173]: E0929 10:37:51.213309    5173 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759142271213089987  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303433}  inodes_used:{value:134}}"
	Sep 29 10:37:51 functional-992924 kubelet[5173]: E0929 10:37:51.213337    5173 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759142271213089987  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303433}  inodes_used:{value:134}}"
	Sep 29 10:37:56 functional-992924 kubelet[5173]: E0929 10:37:56.100045    5173 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-9xvxk" podUID="a59b4519-1efe-4d8f-871b-966656355480"
	Sep 29 10:38:01 functional-992924 kubelet[5173]: E0929 10:38:01.215270    5173 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759142281214972955  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303433}  inodes_used:{value:134}}"
	Sep 29 10:38:01 functional-992924 kubelet[5173]: E0929 10:38:01.215325    5173 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759142281214972955  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303433}  inodes_used:{value:134}}"
	Sep 29 10:38:02 functional-992924 kubelet[5173]: E0929 10:38:02.099769    5173 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-zjm8s" podUID="93c1f421-d656-47cb-a0b3-da32b9797d40"
	Sep 29 10:38:09 functional-992924 kubelet[5173]: E0929 10:38:09.100493    5173 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-9xvxk" podUID="a59b4519-1efe-4d8f-871b-966656355480"
	Sep 29 10:38:11 functional-992924 kubelet[5173]: E0929 10:38:11.217344    5173 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1759142291217066780  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303433}  inodes_used:{value:134}}"
	Sep 29 10:38:11 functional-992924 kubelet[5173]: E0929 10:38:11.217379    5173 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1759142291217066780  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303433}  inodes_used:{value:134}}"
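
Annotation: every ImagePullBackOff in this block traces back to the same CRI-O resolution error: the short name kicbase/echo-server cannot be expanded because the node's /etc/containers/registries.conf defines no unqualified-search registries. Two standard containers-registries.conf(5) fixes, shown as sketches (this assumes the image lives on docker.io, where minikube publishes kicbase/echo-server):

    # /etc/containers/registries.conf -- let unqualified names search Docker Hub
    unqualified-search-registries = ["docker.io"]

    # or /etc/containers/registries.conf.d/000-shortnames.conf -- pin just this alias
    [aliases]
    "kicbase/echo-server" = "docker.io/kicbase/echo-server"

Alternatively, referencing the fully qualified name docker.io/kicbase/echo-server in the pod spec sidesteps short-name resolution without touching node configuration.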
	
	
	==> kubernetes-dashboard [d7c461c05e1d062136a82004bb56bff587b9365e51fbe610f879cb0ac9dd6e87] <==
	2025/09/29 10:28:35 Using namespace: kubernetes-dashboard
	2025/09/29 10:28:35 Using in-cluster config to connect to apiserver
	2025/09/29 10:28:35 Using secret token for csrf signing
	2025/09/29 10:28:35 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/09/29 10:28:35 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/09/29 10:28:35 Successful initial request to the apiserver, version: v1.34.0
	2025/09/29 10:28:35 Generating JWE encryption key
	2025/09/29 10:28:35 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/09/29 10:28:35 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/09/29 10:28:35 Initializing JWE encryption key from synchronized object
	2025/09/29 10:28:35 Creating in-cluster Sidecar client
	2025/09/29 10:28:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/09/29 10:28:35 Serving insecurely on HTTP port: 9090
	2025/09/29 10:29:05 Successful request to sidecar
	2025/09/29 10:28:35 Starting overwatch
	
	
	==> storage-provisioner [6a8ddccc7309299fc33c9e7ac467499addd29a70a059903c8779d6a34f4ec359] <==
	I0929 10:27:29.391975       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0929 10:27:29.393236       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [b16b9fb3dd3af1f528d90281f9b02a0295025a665aa4c6bcac104b0305702b64] <==
	W0929 10:37:47.060493       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:37:49.063418       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:37:49.067220       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:37:51.069772       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:37:51.073335       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:37:53.076857       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:37:53.081665       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:37:55.085158       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:37:55.089165       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:37:57.092226       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:37:57.096326       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:37:59.099417       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:37:59.103841       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:38:01.107513       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:38:01.112226       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:38:03.114979       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:38:03.118862       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:38:05.122442       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:38:05.126393       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:38:07.130739       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:38:07.134976       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:38:09.136928       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:38:09.141306       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:38:11.144332       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 10:38:11.148420       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
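
Annotation: these warnings come from client-go's deprecation handler, and the steady roughly-2-second cadence suggests a leader-election renew loop that still takes its lock on a v1 Endpoints object; the warning text recommends EndpointSlice because that is the generic replacement for the Endpoints API, but for leader election the modern lock is a coordination.k8s.io/v1 Lease. A minimal sketch of the Lease-based alternative, assuming standard client-go and illustrative names (lock "storage-provisioner" in kube-system; this is not the provisioner's actual code):

    package main

    import (
    	"context"
    	"log"
    	"os"
    	"time"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    	"k8s.io/client-go/tools/leaderelection"
    	"k8s.io/client-go/tools/leaderelection/resourcelock"
    )

    func main() {
    	// The provisioner runs in-cluster, so use the mounted service account.
    	cfg, err := rest.InClusterConfig()
    	if err != nil {
    		log.Fatal(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	// Lease-based lock: no deprecated v1 Endpoints traffic, so none of the
    	// warnings captured above would be emitted.
    	lock := &resourcelock.LeaseLock{
    		LeaseMeta:  metav1.ObjectMeta{Name: "storage-provisioner", Namespace: "kube-system"},
    		Client:     client.CoordinationV1(),
    		LockConfig: resourcelock.ResourceLockConfig{Identity: os.Getenv("HOSTNAME")},
    	}

    	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
    		Lock:            lock,
    		ReleaseOnCancel: true,
    		LeaseDuration:   15 * time.Second,
    		RenewDeadline:   10 * time.Second,
    		RetryPeriod:     2 * time.Second, // matches the ~2s renew cadence seen above
    		Callbacks: leaderelection.LeaderCallbacks{
    			OnStartedLeading: func(ctx context.Context) { log.Println("acquired lease; start provisioning") },
    			OnStoppedLeading: func() { log.Println("lost lease; stop provisioning") },
    		},
    	})
    }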
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-992924 -n functional-992924
helpers_test.go:269: (dbg) Run:  kubectl --context functional-992924 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-9xvxk hello-node-connect-7d85dfc575-zjm8s
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-992924 describe pod busybox-mount hello-node-75c85bcc94-9xvxk hello-node-connect-7d85dfc575-zjm8s
helpers_test.go:290: (dbg) kubectl --context functional-992924 describe pod busybox-mount hello-node-75c85bcc94-9xvxk hello-node-connect-7d85dfc575-zjm8s:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-992924/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 10:28:19 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  cri-o://15d596b346f44608b30d8c53220b0c9f12106dcd403a742306b2ced55fd049e3
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 29 Sep 2025 10:28:22 +0000
	      Finished:     Mon, 29 Sep 2025 10:28:22 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7bn2v (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-7bn2v:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  9m53s  default-scheduler  Successfully assigned default/busybox-mount to functional-992924
	  Normal  Pulling    9m53s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     9m50s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.218s (2.218s including waiting). Image size: 4631262 bytes.
	  Normal  Created    9m50s  kubelet            Created container: mount-munger
	  Normal  Started    9m50s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-9xvxk
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-992924/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 10:28:14 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-l2r9v (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-l2r9v:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m58s                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-9xvxk to functional-992924
	  Normal   Pulling    7m (x5 over 9m57s)      kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m (x5 over 9m57s)      kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     7m (x5 over 9m57s)      kubelet            Error: ErrImagePull
	  Warning  Failed     4m53s (x20 over 9m57s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m39s (x21 over 9m57s)  kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-zjm8s
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-992924/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 10:28:10 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:           10.244.0.5
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ql95t (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-ql95t:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-zjm8s to functional-992924
	  Normal   Pulling    7m4s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m4s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     7m4s (x5 over 10m)    kubelet            Error: ErrImagePull
	  Normal   BackOff    4m50s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m50s (x21 over 10m)  kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (602.95s)
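Root-cause note: all three non-running pods in the post-mortem trace back to the CRI-O short-name rule quoted in the kubelet events above — "kicbase/echo-server" carries no registry prefix, no alias is defined for it, and the node lists no unqualified-search registries in /etc/containers/registries.conf. A minimal sketch of two ways to unblock the pull, reusing the profile name from the logs; the docker.io prefix, the :latest tag, and the crio restart are assumptions, not taken from this run:

    # Option 1: point the deployment at a fully-qualified image so CRI-O
    # never attempts short-name resolution (assumes the image is on Docker Hub).
    kubectl --context functional-992924 set image deployment/hello-node-connect \
      echo-server=docker.io/kicbase/echo-server:latest

    # Option 2: allow unqualified names to fall back to docker.io on the node.
    # Run inside the node after "out/minikube-linux-amd64 -p functional-992924 ssh":
    sudo tee -a /etc/containers/registries.conf <<'EOF'
    unqualified-search-registries = ["docker.io"]
    EOF
    sudo systemctl restart crio   # reread the registries config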

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (2.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 image ls --format short --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p functional-992924 image ls --format short --alsologtostderr: (2.247544074s)
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-992924 image ls --format short --alsologtostderr:

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-992924 image ls --format short --alsologtostderr:
I0929 10:28:38.261986   50110 out.go:360] Setting OutFile to fd 1 ...
I0929 10:28:38.262257   50110 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 10:28:38.262273   50110 out.go:374] Setting ErrFile to fd 2...
I0929 10:28:38.262279   50110 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 10:28:38.262594   50110 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21657-3615/.minikube/bin
I0929 10:28:38.263463   50110 config.go:182] Loaded profile config "functional-992924": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 10:28:38.263607   50110 config.go:182] Loaded profile config "functional-992924": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 10:28:38.264198   50110 cli_runner.go:164] Run: docker container inspect functional-992924 --format={{.State.Status}}
I0929 10:28:38.288018   50110 ssh_runner.go:195] Run: systemctl --version
I0929 10:28:38.288081   50110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-992924
I0929 10:28:38.310986   50110 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21657-3615/.minikube/machines/functional-992924/id_rsa Username:docker}
I0929 10:28:38.406217   50110 ssh_runner.go:195] Run: sudo crictl images --output json
I0929 10:28:40.447256   50110 ssh_runner.go:235] Completed: sudo crictl images --output json: (2.041005837s)
W0929 10:28:40.447330   50110 cache_images.go:735] Failed to list images for profile functional-992924 crictl images: sudo crictl images --output json: Process exited with status 1
stdout:

                                                
                                                
stderr:
E0929 10:28:40.444173    8346 remote_image.go:136] "ListImages with filter from image service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" filter="&ImageFilter{Image:&ImageSpec{Image:,Annotations:map[string]string{},},}"
time="2025-09-29T10:28:40Z" level=fatal msg="listing images: rpc error: code = DeadlineExceeded desc = context deadline exceeded"
functional_test.go:290: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (2.25s)
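The 2.25s failure here is a client-side timeout rather than a missing image: crictl gave up with DeadlineExceeded while the image service was busy (plausibly contending with the image pulls backing off in the parallel tests above). A diagnostic sketch under that assumption, using the same profile:

    # Retry the listing with a longer timeout to separate "slow" from "stuck".
    out/minikube-linux-amd64 -p functional-992924 ssh -- sudo crictl --timeout 30s images --output json

    # If it still stalls, the CRI-O journal usually names the blocking call.
    out/minikube-linux-amd64 -p functional-992924 ssh -- sudo journalctl -u crio -n 100 --no-pager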

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (600.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-992924 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-992924 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-9xvxk" [a59b4519-1efe-4d8f-871b-966656355480] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-992924 -n functional-992924
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-09-29 10:38:15.130592819 +0000 UTC m=+1124.545405557
functional_test.go:1460: (dbg) Run:  kubectl --context functional-992924 describe po hello-node-75c85bcc94-9xvxk -n default
functional_test.go:1460: (dbg) kubectl --context functional-992924 describe po hello-node-75c85bcc94-9xvxk -n default:
Name:             hello-node-75c85bcc94-9xvxk
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-992924/192.168.49.2
Start Time:       Mon, 29 Sep 2025 10:28:14 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-l2r9v (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-l2r9v:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-9xvxk to functional-992924
Normal   Pulling    7m3s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m3s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
Warning  Failed     7m3s (x5 over 10m)    kubelet            Error: ErrImagePull
Warning  Failed     4m56s (x20 over 10m)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m42s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-992924 logs hello-node-75c85bcc94-9xvxk -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-992924 logs hello-node-75c85bcc94-9xvxk -n default: exit status 1 (59.720279ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-9xvxk" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-992924 logs hello-node-75c85bcc94-9xvxk -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.54s)
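DeployApp fails for the same short-name reason as ServiceCmdConnect above: the deployment was created from the bare name "kicbase/echo-server". A sketch of the same two commands from functional_test.go with a fully-qualified reference, which sidesteps short-name resolution entirely (the docker.io prefix and :latest tag are assumptions):

    kubectl --context functional-992924 create deployment hello-node \
      --image=docker.io/kicbase/echo-server:latest
    kubectl --context functional-992924 expose deployment hello-node \
      --type=NodePort --port=8080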

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-992924 service --namespace=default --https --url hello-node: exit status 115 (511.498281ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:30332
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-992924 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.51s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-992924 service hello-node --url --format={{.IP}}: exit status 115 (512.070238ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-992924 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.51s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-992924 service hello-node --url: exit status 115 (508.872291ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:30332
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-992924 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30332
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.51s)
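HTTPS, Format, and URL above all fail the same way: minikube resolves the NodePort fine (it prints a URL on stdout each time) but exits 115 because the service has no running backend — the hello-node pod is still stuck in ImagePullBackOff. A quick sketch showing the split between "service exists" and "service has endpoints", using only the context from the logs:

    # The Service object and its NodePort are there...
    kubectl --context functional-992924 get svc hello-node -o wide

    # ...but the endpoints list stays empty while the pod cannot pull its image,
    # which is exactly the condition SVC_UNREACHABLE reports.
    kubectl --context functional-992924 get endpoints hello-node
    kubectl --context functional-992924 get pods -l app=hello-node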

                                                
                                    

Test pass (298/332)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 4.94
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.2
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.34.0/json-events 5.16
13 TestDownloadOnly/v1.34.0/preload-exists 0
17 TestDownloadOnly/v1.34.0/LogsDuration 0.06
18 TestDownloadOnly/v1.34.0/DeleteAll 0.21
19 TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds 0.12
20 TestDownloadOnlyKic 1.14
21 TestBinaryMirror 0.79
22 TestOffline 58.43
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 143.91
31 TestAddons/serial/GCPAuth/Namespaces 0.13
32 TestAddons/serial/GCPAuth/FakeCredentials 8.48
35 TestAddons/parallel/Registry 15.2
36 TestAddons/parallel/RegistryCreds 0.59
38 TestAddons/parallel/InspektorGadget 5.26
39 TestAddons/parallel/MetricsServer 5.67
41 TestAddons/parallel/CSI 48.96
42 TestAddons/parallel/Headlamp 18.37
43 TestAddons/parallel/CloudSpanner 5.52
44 TestAddons/parallel/LocalPath 50.54
45 TestAddons/parallel/NvidiaDevicePlugin 5.48
46 TestAddons/parallel/Yakd 10.65
47 TestAddons/parallel/AmdGpuDevicePlugin 6.47
48 TestAddons/StoppedEnableDisable 16.4
49 TestCertOptions 25
50 TestCertExpiration 216.23
52 TestForceSystemdFlag 27.47
53 TestForceSystemdEnv 25.4
55 TestKVMDriverInstallOrUpdate 0.54
59 TestErrorSpam/setup 21.27
60 TestErrorSpam/start 0.61
61 TestErrorSpam/status 0.88
62 TestErrorSpam/pause 1.42
63 TestErrorSpam/unpause 1.48
64 TestErrorSpam/stop 2.48
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 68.46
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 6.41
71 TestFunctional/serial/KubeContext 0.04
72 TestFunctional/serial/KubectlGetPods 0.12
75 TestFunctional/serial/CacheCmd/cache/add_remote 2.72
76 TestFunctional/serial/CacheCmd/cache/add_local 0.96
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.28
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.68
81 TestFunctional/serial/CacheCmd/cache/delete 0.09
82 TestFunctional/serial/MinikubeKubectlCmd 0.1
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
84 TestFunctional/serial/ExtraConfig 41.7
85 TestFunctional/serial/ComponentHealth 0.06
86 TestFunctional/serial/LogsCmd 1.36
87 TestFunctional/serial/LogsFileCmd 1.37
88 TestFunctional/serial/InvalidService 4.03
90 TestFunctional/parallel/ConfigCmd 0.34
91 TestFunctional/parallel/DashboardCmd 5.09
92 TestFunctional/parallel/DryRun 0.34
93 TestFunctional/parallel/InternationalLanguage 0.16
94 TestFunctional/parallel/StatusCmd 0.88
99 TestFunctional/parallel/AddonsCmd 0.14
100 TestFunctional/parallel/PersistentVolumeClaim 24.71
102 TestFunctional/parallel/SSHCmd 0.58
103 TestFunctional/parallel/CpCmd 1.75
104 TestFunctional/parallel/MySQL 17
105 TestFunctional/parallel/FileSync 0.25
106 TestFunctional/parallel/CertSync 1.51
110 TestFunctional/parallel/NodeLabels 0.08
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.59
114 TestFunctional/parallel/License 0.25
115 TestFunctional/parallel/Version/short 0.05
116 TestFunctional/parallel/Version/components 0.54
118 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
119 TestFunctional/parallel/ImageCommands/ImageListJson 0.21
120 TestFunctional/parallel/ImageCommands/ImageListYaml 1.85
121 TestFunctional/parallel/ImageCommands/ImageBuild 2.68
122 TestFunctional/parallel/ImageCommands/Setup 0.44
123 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
125 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.51
126 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.31
127 TestFunctional/parallel/ProfileCmd/profile_list 0.41
128 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
130 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.22
131 TestFunctional/parallel/ProfileCmd/profile_json_output 0.4
132 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.31
133 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1
134 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.49
135 TestFunctional/parallel/ImageCommands/ImageRemove 0.49
136 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.69
137 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.53
139 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
140 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
144 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
145 TestFunctional/parallel/MountCmd/any-port 6.61
146 TestFunctional/parallel/MountCmd/specific-port 1.51
147 TestFunctional/parallel/MountCmd/VerifyCleanup 1.84
148 TestFunctional/parallel/UpdateContextCmd/no_changes 0.13
149 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.12
150 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.12
151 TestFunctional/parallel/ServiceCmd/List 1.68
152 TestFunctional/parallel/ServiceCmd/JSONOutput 1.67
156 TestFunctional/delete_echo-server_images 0.04
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
163 TestMultiControlPlane/serial/StartCluster 144.31
164 TestMultiControlPlane/serial/DeployApp 5.87
165 TestMultiControlPlane/serial/PingHostFromPods 1.04
166 TestMultiControlPlane/serial/AddWorkerNode 23.84
167 TestMultiControlPlane/serial/NodeLabels 0.06
168 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.85
169 TestMultiControlPlane/serial/CopyFile 16.04
170 TestMultiControlPlane/serial/StopSecondaryNode 14
171 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.67
172 TestMultiControlPlane/serial/RestartSecondaryNode 9.51
173 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.89
174 TestMultiControlPlane/serial/RestartClusterKeepsNodes 106.01
175 TestMultiControlPlane/serial/DeleteSecondaryNode 11.31
176 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.68
177 TestMultiControlPlane/serial/StopCluster 42.94
178 TestMultiControlPlane/serial/RestartCluster 54.43
179 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.66
180 TestMultiControlPlane/serial/AddSecondaryNode 71.19
181 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.86
185 TestJSONOutput/start/Command 68.12
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Command 0.64
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Command 0.59
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.95
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.2
210 TestKicCustomNetwork/create_custom_network 28.27
211 TestKicCustomNetwork/use_default_bridge_network 21.97
212 TestKicExistingNetwork 23.41
213 TestKicCustomSubnet 22.25
214 TestKicStaticIP 24.18
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 48.18
219 TestMountStart/serial/StartWithMountFirst 5.46
220 TestMountStart/serial/VerifyMountFirst 0.26
221 TestMountStart/serial/StartWithMountSecond 5.24
222 TestMountStart/serial/VerifyMountSecond 0.24
223 TestMountStart/serial/DeleteFirst 1.64
224 TestMountStart/serial/VerifyMountPostDelete 0.25
225 TestMountStart/serial/Stop 1.18
226 TestMountStart/serial/RestartStopped 7.36
227 TestMountStart/serial/VerifyMountPostStop 0.25
230 TestMultiNode/serial/FreshStart2Nodes 93.97
231 TestMultiNode/serial/DeployApp2Nodes 4.69
232 TestMultiNode/serial/PingHostFrom2Pods 0.72
233 TestMultiNode/serial/AddNode 53.9
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.62
236 TestMultiNode/serial/CopyFile 9.16
237 TestMultiNode/serial/StopNode 2.21
238 TestMultiNode/serial/StartAfterStop 7.04
239 TestMultiNode/serial/RestartKeepsNodes 79.17
240 TestMultiNode/serial/DeleteNode 5.18
241 TestMultiNode/serial/StopMultiNode 28.54
242 TestMultiNode/serial/RestartMultiNode 48.67
243 TestMultiNode/serial/ValidateNameConflict 23.91
248 TestPreload 106.91
250 TestScheduledStopUnix 95.63
253 TestInsufficientStorage 9.71
254 TestRunningBinaryUpgrade 54.01
256 TestKubernetesUpgrade 311.6
257 TestMissingContainerUpgrade 66.73
268 TestStoppedBinaryUpgrade/Setup 0.53
269 TestStoppedBinaryUpgrade/Upgrade 65.84
274 TestNetworkPlugins/group/false 10.94
278 TestStoppedBinaryUpgrade/MinikubeLogs 1.13
280 TestPause/serial/Start 45.75
282 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
283 TestNoKubernetes/serial/StartWithK8s 25.81
284 TestPause/serial/SecondStartNoReconfiguration 6.41
285 TestNoKubernetes/serial/StartWithStopK8s 23.09
286 TestPause/serial/Pause 0.63
287 TestPause/serial/VerifyStatus 0.3
288 TestPause/serial/Unpause 0.61
289 TestPause/serial/PauseAgain 0.63
290 TestPause/serial/DeletePaused 2.61
291 TestPause/serial/VerifyDeletedResources 16.94
292 TestNoKubernetes/serial/Start 7.27
293 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
294 TestNoKubernetes/serial/ProfileList 1.74
295 TestNoKubernetes/serial/Stop 2.54
296 TestNoKubernetes/serial/StartNoArgs 6.32
297 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
299 TestStartStop/group/old-k8s-version/serial/FirstStart 54.78
301 TestStartStop/group/no-preload/serial/FirstStart 51.41
302 TestStartStop/group/old-k8s-version/serial/DeployApp 8.3
303 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.86
304 TestStartStop/group/old-k8s-version/serial/Stop 16.08
305 TestStartStop/group/no-preload/serial/DeployApp 9.31
306 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.79
307 TestStartStop/group/no-preload/serial/Stop 16.33
308 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.16
309 TestStartStop/group/old-k8s-version/serial/SecondStart 51.84
310 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
311 TestStartStop/group/no-preload/serial/SecondStart 43.93
312 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
313 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
314 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
315 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
316 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.27
317 TestStartStop/group/old-k8s-version/serial/Pause 2.8
318 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
319 TestStartStop/group/no-preload/serial/Pause 3.18
321 TestStartStop/group/embed-certs/serial/FirstStart 45.66
323 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 39.16
325 TestStartStop/group/newest-cni/serial/FirstStart 28.14
326 TestStartStop/group/newest-cni/serial/DeployApp 0
327 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.73
328 TestStartStop/group/newest-cni/serial/Stop 2.39
329 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.17
330 TestStartStop/group/newest-cni/serial/SecondStart 11.84
331 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.31
332 TestStartStop/group/embed-certs/serial/DeployApp 8.32
333 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
334 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
335 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
336 TestStartStop/group/newest-cni/serial/Pause 2.57
337 TestNetworkPlugins/group/auto/Start 41.01
338 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.38
339 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.28
340 TestStartStop/group/default-k8s-diff-port/serial/Stop 17.84
341 TestStartStop/group/embed-certs/serial/Stop 18.17
342 TestNetworkPlugins/group/flannel/Start 74.09
343 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
344 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 54.68
345 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
346 TestStartStop/group/embed-certs/serial/SecondStart 47.66
347 TestNetworkPlugins/group/auto/KubeletFlags 0.33
348 TestNetworkPlugins/group/auto/NetCatPod 11.27
349 TestNetworkPlugins/group/auto/DNS 0.14
350 TestNetworkPlugins/group/auto/Localhost 0.12
351 TestNetworkPlugins/group/auto/HairPin 0.12
352 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
353 TestNetworkPlugins/group/enable-default-cni/Start 67.35
354 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
355 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
356 TestNetworkPlugins/group/flannel/ControllerPod 6.01
357 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.23
358 TestStartStop/group/embed-certs/serial/Pause 2.68
359 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
360 TestNetworkPlugins/group/flannel/KubeletFlags 0.27
361 TestNetworkPlugins/group/flannel/NetCatPod 9.2
362 TestNetworkPlugins/group/bridge/Start 38.67
363 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.33
364 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.05
365 TestNetworkPlugins/group/flannel/DNS 0.16
366 TestNetworkPlugins/group/calico/Start 47.15
367 TestNetworkPlugins/group/flannel/Localhost 0.14
368 TestNetworkPlugins/group/flannel/HairPin 0.14
369 TestNetworkPlugins/group/kindnet/Start 71.86
370 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
371 TestNetworkPlugins/group/bridge/NetCatPod 9.21
372 TestNetworkPlugins/group/bridge/DNS 0.17
373 TestNetworkPlugins/group/bridge/Localhost 0.12
374 TestNetworkPlugins/group/bridge/HairPin 0.17
375 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.31
376 TestNetworkPlugins/group/enable-default-cni/NetCatPod 8.22
377 TestNetworkPlugins/group/calico/ControllerPod 6.01
378 TestNetworkPlugins/group/calico/KubeletFlags 0.28
379 TestNetworkPlugins/group/calico/NetCatPod 10.23
380 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
381 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
382 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
383 TestNetworkPlugins/group/custom-flannel/Start 47.13
384 TestNetworkPlugins/group/calico/DNS 0.16
385 TestNetworkPlugins/group/calico/Localhost 0.11
386 TestNetworkPlugins/group/calico/HairPin 0.11
387 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
388 TestNetworkPlugins/group/kindnet/KubeletFlags 0.27
389 TestNetworkPlugins/group/kindnet/NetCatPod 9.16
390 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.27
391 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.17
392 TestNetworkPlugins/group/kindnet/DNS 0.12
393 TestNetworkPlugins/group/kindnet/Localhost 0.11
394 TestNetworkPlugins/group/kindnet/HairPin 0.11
395 TestNetworkPlugins/group/custom-flannel/DNS 0.15
396 TestNetworkPlugins/group/custom-flannel/Localhost 0.11
397 TestNetworkPlugins/group/custom-flannel/HairPin 0.12
TestDownloadOnly/v1.28.0/json-events (4.94s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-177738 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-177738 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.937940466s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (4.94s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I0929 10:19:35.559504    7117 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
I0929 10:19:35.559606    7117 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21657-3615/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
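preload-exists passes instantly because the json-events run above already downloaded the tarball into the minikube cache. A sketch for verifying that cache by hand; the lz4 integrity check is an assumption about how one might validate the archive, not part of the test:

    # List the cached preload tarballs for this minikube home.
    ls -lh /home/jenkins/minikube-integration/21657-3615/.minikube/cache/preloaded-tarball/

    # Optional: confirm the archive decompresses cleanly (requires the lz4 CLI).
    lz4 -t /home/jenkins/minikube-integration/21657-3615/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4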

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-177738
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-177738: exit status 85 (68.796443ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-177738 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-177738 │ jenkins │ v1.37.0 │ 29 Sep 25 10:19 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 10:19:30
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 10:19:30.662859    7129 out.go:360] Setting OutFile to fd 1 ...
	I0929 10:19:30.663092    7129 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:19:30.663101    7129 out.go:374] Setting ErrFile to fd 2...
	I0929 10:19:30.663105    7129 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:19:30.663285    7129 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21657-3615/.minikube/bin
	W0929 10:19:30.663416    7129 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21657-3615/.minikube/config/config.json: open /home/jenkins/minikube-integration/21657-3615/.minikube/config/config.json: no such file or directory
	I0929 10:19:30.663918    7129 out.go:368] Setting JSON to true
	I0929 10:19:30.664762    7129 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":115,"bootTime":1759141056,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 10:19:30.664845    7129 start.go:140] virtualization: kvm guest
	I0929 10:19:30.667278    7129 out.go:99] [download-only-177738] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W0929 10:19:30.667387    7129 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/21657-3615/.minikube/cache/preloaded-tarball: no such file or directory
	I0929 10:19:30.667429    7129 notify.go:220] Checking for updates...
	I0929 10:19:30.668796    7129 out.go:171] MINIKUBE_LOCATION=21657
	I0929 10:19:30.670255    7129 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 10:19:30.671670    7129 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21657-3615/kubeconfig
	I0929 10:19:30.673005    7129 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21657-3615/.minikube
	I0929 10:19:30.674201    7129 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W0929 10:19:30.676276    7129 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0929 10:19:30.676518    7129 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 10:19:30.699515    7129 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 10:19:30.699578    7129 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 10:19:31.082489    7129 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:64 SystemTime:2025-09-29 10:19:31.072199346 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 10:19:31.082588    7129 docker.go:318] overlay module found
	I0929 10:19:31.084176    7129 out.go:99] Using the docker driver based on user configuration
	I0929 10:19:31.084204    7129 start.go:304] selected driver: docker
	I0929 10:19:31.084209    7129 start.go:924] validating driver "docker" against <nil>
	I0929 10:19:31.084294    7129 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 10:19:31.141053    7129 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:64 SystemTime:2025-09-29 10:19:31.132256696 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 10:19:31.141195    7129 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0929 10:19:31.141662    7129 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I0929 10:19:31.141827    7129 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0929 10:19:31.143627    7129 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-177738 host does not exist
	  To start a cluster, run: "minikube start -p download-only-177738"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)
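Note that this test passes even though "minikube logs" exits non-zero: a --download-only profile never creates the control-plane container, so there is no host to collect logs from, and the assertion here mainly bounds how long the command takes to report that. A sketch of reproducing the behavior outside the suite, reusing the exact flags from the audit table above:

    out/minikube-linux-amd64 start -o=json --download-only -p download-only-177738 --force \
      --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker
    out/minikube-linux-amd64 logs -p download-only-177738
    echo $?   # 85 in this run: the control-plane node host does not exist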

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.20s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-177738
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestDownloadOnly/v1.34.0/json-events (5.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-449454 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-449454 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.161428747s)
--- PASS: TestDownloadOnly/v1.34.0/json-events (5.16s)

                                                
                                    
TestDownloadOnly/v1.34.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/preload-exists
I0929 10:19:41.111378    7117 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
I0929 10:19:41.111430    7117 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21657-3615/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-449454
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-449454: exit status 85 (56.17689ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-177738 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-177738 │ jenkins │ v1.37.0 │ 29 Sep 25 10:19 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 29 Sep 25 10:19 UTC │ 29 Sep 25 10:19 UTC │
	│ delete  │ -p download-only-177738                                                                                                                                                   │ download-only-177738 │ jenkins │ v1.37.0 │ 29 Sep 25 10:19 UTC │ 29 Sep 25 10:19 UTC │
	│ start   │ -o=json --download-only -p download-only-449454 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-449454 │ jenkins │ v1.37.0 │ 29 Sep 25 10:19 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 10:19:35
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 10:19:35.987992    7474 out.go:360] Setting OutFile to fd 1 ...
	I0929 10:19:35.988349    7474 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:19:35.988363    7474 out.go:374] Setting ErrFile to fd 2...
	I0929 10:19:35.988370    7474 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:19:35.988904    7474 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21657-3615/.minikube/bin
	I0929 10:19:35.989699    7474 out.go:368] Setting JSON to true
	I0929 10:19:35.990564    7474 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":120,"bootTime":1759141056,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 10:19:35.990647    7474 start.go:140] virtualization: kvm guest
	I0929 10:19:35.992204    7474 out.go:99] [download-only-449454] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 10:19:35.992314    7474 notify.go:220] Checking for updates...
	I0929 10:19:35.993573    7474 out.go:171] MINIKUBE_LOCATION=21657
	I0929 10:19:35.994906    7474 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 10:19:35.996179    7474 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21657-3615/kubeconfig
	I0929 10:19:35.997383    7474 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21657-3615/.minikube
	I0929 10:19:35.998622    7474 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W0929 10:19:36.000868    7474 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0929 10:19:36.001094    7474 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 10:19:36.023187    7474 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 10:19:36.023261    7474 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 10:19:36.073925    7474 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:52 SystemTime:2025-09-29 10:19:36.065151502 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 10:19:36.074024    7474 docker.go:318] overlay module found
	I0929 10:19:36.075790    7474 out.go:99] Using the docker driver based on user configuration
	I0929 10:19:36.075818    7474 start.go:304] selected driver: docker
	I0929 10:19:36.075823    7474 start.go:924] validating driver "docker" against <nil>
	I0929 10:19:36.075923    7474 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 10:19:36.128264    7474 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:52 SystemTime:2025-09-29 10:19:36.119141387 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 10:19:36.128415    7474 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0929 10:19:36.128927    7474 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I0929 10:19:36.129062    7474 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0929 10:19:36.130750    7474 out.go:171] Using Docker driver with root privileges
	I0929 10:19:36.132041    7474 cni.go:84] Creating CNI manager for ""
	I0929 10:19:36.132095    7474 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0929 10:19:36.132105    7474 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0929 10:19:36.132160    7474 start.go:348] cluster config:
	{Name:download-only-449454 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:download-only-449454 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 10:19:36.133374    7474 out.go:99] Starting "download-only-449454" primary control-plane node in "download-only-449454" cluster
	I0929 10:19:36.133392    7474 cache.go:123] Beginning downloading kic base image for docker with crio
	I0929 10:19:36.134442    7474 out.go:99] Pulling base image v0.0.48 ...
	I0929 10:19:36.134470    7474 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 10:19:36.134572    7474 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 10:19:36.151695    7474 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I0929 10:19:36.151934    7474 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory
	I0929 10:19:36.151956    7474 image.go:68] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory, skipping pull
	I0929 10:19:36.151972    7474 image.go:137] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in cache, skipping pull
	I0929 10:19:36.151983    7474 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 as a tarball
	I0929 10:19:36.155214    7474 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.0/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0929 10:19:36.155230    7474 cache.go:58] Caching tarball of preloaded images
	I0929 10:19:36.155356    7474 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 10:19:36.157050    7474 out.go:99] Downloading Kubernetes v1.34.0 preload ...
	I0929 10:19:36.157068    7474 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 ...
	I0929 10:19:36.183297    7474 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.0/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2ff28357f4fb6607eaee8f503f8804cd -> /home/jenkins/minikube-integration/21657-3615/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0929 10:19:39.701996    7474 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 ...
	I0929 10:19:39.702092    7474 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/21657-3615/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 ...
	I0929 10:19:40.491523    7474 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0929 10:19:40.491834    7474 profile.go:143] Saving config to /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/download-only-449454/config.json ...
	I0929 10:19:40.491865    7474 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/download-only-449454/config.json: {Name:mk9fd56eaba344a21a106b8f884ab3a1ab92a1f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 10:19:40.492061    7474 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0929 10:19:40.492242    7474 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/21657-3615/.minikube/cache/linux/amd64/v1.34.0/kubectl
	
	
	* The control-plane node download-only-449454 host does not exist
	  To start a cluster, run: "minikube start -p download-only-449454"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.0/LogsDuration (0.06s)
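
Note that exit status 85 is the expected result here: as the stdout above says, a --download-only profile never creates a control-plane host, so there is nothing for "minikube logs" to collect. A minimal by-hand reproduction using this run's profile name (a sketch, not part of the test):

    out/minikube-linux-amd64 start -o=json --download-only -p download-only-449454 --force --kubernetes-version=v1.34.0 --container-runtime=crio --driver=docker
    out/minikube-linux-amd64 logs -p download-only-449454    # exits 85: host does not exist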

TestDownloadOnly/v1.34.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.34.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.0/DeleteAll (0.21s)

TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-449454
--- PASS: TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnlyKic (1.14s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-668605 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-668605" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-668605
--- PASS: TestDownloadOnlyKic (1.14s)

TestBinaryMirror (0.79s)

=== RUN   TestBinaryMirror
I0929 10:19:42.888981    7117 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-935346 --alsologtostderr --binary-mirror http://127.0.0.1:44827 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-935346" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-935346
--- PASS: TestBinaryMirror (0.79s)
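
--binary-mirror redirects the kubectl/kubelet/kubeadm downloads to an alternate host; this test serves one on 127.0.0.1:44827. A hand-rolled sketch (port, directory, and layout are assumptions; the tree must mimic dl.k8s.io's release/<version>/bin/linux/amd64/ paths):

    python3 -m http.server 44827 --directory /path/to/mirror &
    out/minikube-linux-amd64 start --download-only -p binary-mirror-demo --binary-mirror http://127.0.0.1:44827 --driver=docker --container-runtime=crio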

TestOffline (58.43s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-785193 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-785193 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (56.039373742s)
helpers_test.go:175: Cleaning up "offline-crio-785193" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-785193
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-785193: (2.39026114s)
--- PASS: TestOffline (58.43s)
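
TestOffline leans on artifacts cached by the earlier download-only runs; a quick way to check they are in place (a sketch, assuming the default MINIKUBE_HOME layout seen in the logs above):

    ls ~/.minikube/cache/preloaded-tarball/     # expect preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
    docker images gcr.io/k8s-minikube/kicbase   # the kicbase node image should already be present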

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-300979
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-300979: exit status 85 (48.373278ms)
-- stdout --
	* Profile "addons-300979" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-300979"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-300979
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-300979: exit status 85 (49.249669ms)
-- stdout --
	* Profile "addons-300979" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-300979"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (143.91s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-300979 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-300979 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m23.911303033s)
--- PASS: TestAddons/Setup (143.91s)
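
With an --addons list that long, a follow-up check of what actually came up can be useful; minikube reports per-addon status (a sketch against this run's profile):

    out/minikube-linux-amd64 -p addons-300979 addons list    # one line per addon: enabled/disabled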

TestAddons/serial/GCPAuth/Namespaces (0.13s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-300979 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-300979 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

TestAddons/serial/GCPAuth/FakeCredentials (8.48s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-300979 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-300979 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [2da4a8d0-2c84-4e2f-a258-eab30010de77] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [2da4a8d0-2c84-4e2f-a258-eab30010de77] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.003232001s
addons_test.go:694: (dbg) Run:  kubectl --context addons-300979 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-300979 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-300979 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.48s)

TestAddons/parallel/Registry (15.2s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 3.310045ms
I0929 10:22:24.787416    7117 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0929 10:22:24.787441    7117 kapi.go:107] duration metric: took 3.093921ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-tzw5c" [6783e8bc-6d03-4f51-a028-5692d87c068a] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003363824s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-mc8zr" [d9145d3f-70fc-493e-a443-621a531cf630] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003656071s
addons_test.go:392: (dbg) Run:  kubectl --context addons-300979 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-300979 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-300979 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.347530606s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-300979 ip
2025/09/29 10:22:39 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-300979 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.20s)
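
The DEBUG GET above polls the registry directly on the node IP; the same endpoint speaks the standard registry v2 API, so a host-side spot check might look like this (a sketch, assuming the addon's default port 5000 shown above):

    curl -s http://$(out/minikube-linux-amd64 -p addons-300979 ip):5000/v2/_catalog    # e.g. {"repositories":[]}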

TestAddons/parallel/RegistryCreds (0.59s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 2.761074ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-300979
addons_test.go:332: (dbg) Run:  kubectl --context addons-300979 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-300979 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.59s)

TestAddons/parallel/InspektorGadget (5.26s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-pvpm8" [f7887910-089b-48ef-8e4f-5ced5a4e2b99] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003746182s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-300979 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (5.26s)

TestAddons/parallel/MetricsServer (5.67s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.000134ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-v8zjg" [6d637f4a-557f-4d88-9ab2-01503ec4f7a3] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003463472s
addons_test.go:463: (dbg) Run:  kubectl --context addons-300979 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-300979 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.67s)

TestAddons/parallel/CSI (48.96s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
I0929 10:22:24.784370    7117 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.102919ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-300979 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-300979 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-300979 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-300979 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-300979 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-300979 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-300979 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-300979 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-300979 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-300979 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-300979 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-300979 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-300979 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-300979 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-300979 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-300979 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-300979 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-300979 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [86d2df1a-2a63-422a-af31-6924c91a74fc] Pending
helpers_test.go:352: "task-pv-pod" [86d2df1a-2a63-422a-af31-6924c91a74fc] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [86d2df1a-2a63-422a-af31-6924c91a74fc] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.002704202s
addons_test.go:572: (dbg) Run:  kubectl --context addons-300979 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-300979 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-300979 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-300979 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-300979 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-300979 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-300979 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-300979 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-300979 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-300979 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-300979 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-300979 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-300979 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [98c9c6e6-1737-49f3-a7a1-8b0f567c269b] Pending
helpers_test.go:352: "task-pv-pod-restore" [98c9c6e6-1737-49f3-a7a1-8b0f567c269b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [98c9c6e6-1737-49f3-a7a1-8b0f567c269b] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.002501061s
addons_test.go:614: (dbg) Run:  kubectl --context addons-300979 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-300979 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-300979 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-300979 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-300979 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-300979 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.63064767s)
--- PASS: TestAddons/parallel/CSI (48.96s)
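
The runs of jsonpath polls above are the test helper's retry loop; interactively, a single kubectl wait expresses the same conditions (a sketch using this run's object names):

    kubectl --context addons-300979 wait --for=jsonpath='{.status.phase}'=Bound pvc/hpvc --timeout=6m
    kubectl --context addons-300979 wait --for=jsonpath='{.status.readyToUse}'=true volumesnapshot/new-snapshot-demo --timeout=6m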

TestAddons/parallel/Headlamp (18.37s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-300979 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-85f8f8dc54-8twnj" [b6d0f3aa-8384-46fa-a5b5-e7e2958cc323] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-85f8f8dc54-8twnj" [b6d0f3aa-8384-46fa-a5b5-e7e2958cc323] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.003617955s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-300979 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-300979 addons disable headlamp --alsologtostderr -v=1: (5.634842962s)
--- PASS: TestAddons/parallel/Headlamp (18.37s)

TestAddons/parallel/CloudSpanner (5.52s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-85f6b7fc65-bm8tr" [1ae73879-0a15-4d02-94f2-8f8fd6f7c08a] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003474619s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-300979 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.52s)

TestAddons/parallel/LocalPath (50.54s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-300979 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-300979 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-300979 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-300979 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-300979 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-300979 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-300979 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [84253778-0de0-4ed8-a399-3b4378d8f513] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [84253778-0de0-4ed8-a399-3b4378d8f513] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [84253778-0de0-4ed8-a399-3b4378d8f513] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.00340055s
addons_test.go:967: (dbg) Run:  kubectl --context addons-300979 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-300979 ssh "cat /opt/local-path-provisioner/pvc-3b9bdaf6-b097-4b55-ac46-9361c664e909_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-300979 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-300979 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-300979 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-300979 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.714134587s)
--- PASS: TestAddons/parallel/LocalPath (50.54s)
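
The host path read by the test follows local-path-provisioner's layout, <base>/<pv-name>_<namespace>_<pvc-name>, which is why the pvc-3b9bdaf6... UID appears in the ssh command above. A sketch that resolves the directory without copying UIDs by hand:

    PV=$(kubectl --context addons-300979 get pvc test-pvc -o jsonpath='{.spec.volumeName}')
    out/minikube-linux-amd64 -p addons-300979 ssh "ls /opt/local-path-provisioner/${PV}_default_test-pvc/"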

TestAddons/parallel/NvidiaDevicePlugin (5.48s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-vsq7c" [f7dcb2e3-2806-4519-a447-fd8f941f56c5] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004959672s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-300979 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.48s)

TestAddons/parallel/Yakd (10.65s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-5ffjd" [1faf9e81-27de-4ee9-b50e-6fcc86a069fe] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.007737993s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-300979 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-300979 addons disable yakd --alsologtostderr -v=1: (5.642054965s)
--- PASS: TestAddons/parallel/Yakd (10.65s)

TestAddons/parallel/AmdGpuDevicePlugin (6.47s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-jmnzb" [63390556-2b47-43e7-8bd7-11e2b91c7cc7] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 6.003856513s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-300979 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (6.47s)

TestAddons/StoppedEnableDisable (16.4s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-300979
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-300979: (16.153333377s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-300979
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-300979
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-300979
--- PASS: TestAddons/StoppedEnableDisable (16.40s)

TestCertOptions (25s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-165749 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-165749 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (21.987815138s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-165749 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-165749 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-165749 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-165749" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-165749
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-165749: (2.401317377s)
--- PASS: TestCertOptions (25.00s)
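
The openssl call above dumps the full certificate; what matters for this test is the SAN list and the 8555 serving port from the start flags, which can be eyeballed directly (a sketch):

    out/minikube-linux-amd64 -p cert-options-165749 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 'Subject Alternative Name'
    # expect IPs 127.0.0.1 and 192.168.15.15, DNS names localhost and www.google.com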

TestCertExpiration (216.23s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-099623 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-099623 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (27.272624203s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-099623 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-099623 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (6.518645343s)
helpers_test.go:175: Cleaning up "cert-expiration-099623" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-099623
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-099623: (2.439829588s)
--- PASS: TestCertExpiration (216.23s)
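
The two starts bracket a certificate rotation: the first issues three-minute certs, the test sits out the expiry (hence the gap inside the 216s total), and the second start with --cert-expiration=8760h regenerates them. The window is visible in the certificate itself (a sketch):

    out/minikube-linux-amd64 -p cert-expiration-099623 ssh "openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"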

TestForceSystemdFlag (27.47s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-682869 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0929 11:02:08.239582    7117 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/addons-300979/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-682869 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (24.731197523s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-682869 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-682869" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-682869
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-682869: (2.461056685s)
--- PASS: TestForceSystemdFlag (27.47s)
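
The cat of 02-crio.conf above is where the test verifies the flag reached the runtime; the relevant key can be pulled out directly (a sketch):

    out/minikube-linux-amd64 -p force-systemd-flag-682869 ssh "grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"
    # with --force-systemd this should print: cgroup_manager = "systemd"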

TestForceSystemdEnv (25.4s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-815372 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-815372 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (22.840831832s)
helpers_test.go:175: Cleaning up "force-systemd-env-815372" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-815372
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-815372: (2.559425033s)
--- PASS: TestForceSystemdEnv (25.40s)
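
This variant drives the same systemd switch through the environment instead of a flag; the manual equivalent would be (a sketch, hypothetical profile name):

    MINIKUBE_FORCE_SYSTEMD=true out/minikube-linux-amd64 start -p force-systemd-demo --memory=3072 --driver=docker --container-runtime=crio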

TestKVMDriverInstallOrUpdate (0.54s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
I0929 11:02:15.097857    7117 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0929 11:02:15.098072    7117 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate1077250440/001:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0929 11:02:15.131008    7117 install.go:163] /tmp/TestKVMDriverInstallOrUpdate1077250440/001/docker-machine-driver-kvm2 version is 1.1.1
W0929 11:02:15.131054    7117 install.go:76] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.37.0
W0929 11:02:15.131203    7117 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0929 11:02:15.131249    7117 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1077250440/001/docker-machine-driver-kvm2
I0929 11:02:15.495786    7117 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate1077250440/001:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0929 11:02:15.514580    7117 install.go:163] /tmp/TestKVMDriverInstallOrUpdate1077250440/001/docker-machine-driver-kvm2 version is 1.37.0
--- PASS: TestKVMDriverInstallOrUpdate (0.54s)
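
The install.go lines above show the update path: minikube asks the driver binary for its version, finds 1.1.1 against a wanted 1.37.0, and re-downloads from the GitHub release. The same probe works by hand (a sketch, assuming the driver is on PATH):

    docker-machine-driver-kvm2 version    # the version string minikube validates against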

TestErrorSpam/setup (21.27s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-709109 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-709109 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-709109 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-709109 --driver=docker  --container-runtime=crio: (21.265721827s)
--- PASS: TestErrorSpam/setup (21.27s)

TestErrorSpam/start (0.61s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-709109 --log_dir /tmp/nospam-709109 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-709109 --log_dir /tmp/nospam-709109 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-709109 --log_dir /tmp/nospam-709109 start --dry-run
--- PASS: TestErrorSpam/start (0.61s)

TestErrorSpam/status (0.88s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-709109 --log_dir /tmp/nospam-709109 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-709109 --log_dir /tmp/nospam-709109 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-709109 --log_dir /tmp/nospam-709109 status
--- PASS: TestErrorSpam/status (0.88s)

TestErrorSpam/pause (1.42s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-709109 --log_dir /tmp/nospam-709109 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-709109 --log_dir /tmp/nospam-709109 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-709109 --log_dir /tmp/nospam-709109 pause
--- PASS: TestErrorSpam/pause (1.42s)

TestErrorSpam/unpause (1.48s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-709109 --log_dir /tmp/nospam-709109 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-709109 --log_dir /tmp/nospam-709109 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-709109 --log_dir /tmp/nospam-709109 unpause
--- PASS: TestErrorSpam/unpause (1.48s)

TestErrorSpam/stop (2.48s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-709109 --log_dir /tmp/nospam-709109 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-709109 --log_dir /tmp/nospam-709109 stop: (2.306497623s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-709109 --log_dir /tmp/nospam-709109 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-709109 --log_dir /tmp/nospam-709109 stop
--- PASS: TestErrorSpam/stop (2.48s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21657-3615/.minikube/files/etc/test/nested/copy/7117/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (68.46s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-992924 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-992924 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m8.454595581s)
--- PASS: TestFunctional/serial/StartWithProxy (68.46s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.41s)

=== RUN   TestFunctional/serial/SoftStart
I0929 10:27:07.063467    7117 config.go:182] Loaded profile config "functional-992924": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-992924 --alsologtostderr -v=8
E0929 10:27:08.239648    7117 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/addons-300979/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:27:08.245977    7117 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/addons-300979/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:27:08.257345    7117 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/addons-300979/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:27:08.278675    7117 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/addons-300979/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:27:08.320059    7117 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/addons-300979/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:27:08.401464    7117 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/addons-300979/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:27:08.562970    7117 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/addons-300979/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:27:08.885039    7117 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/addons-300979/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:27:09.526971    7117 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/addons-300979/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:27:10.809052    7117 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/addons-300979/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:27:13.371178    7117 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/addons-300979/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-992924 --alsologtostderr -v=8: (6.412837497s)
functional_test.go:678: soft start took 6.413557624s for "functional-992924" cluster.
I0929 10:27:13.476660    7117 config.go:182] Loaded profile config "functional-992924": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/SoftStart (6.41s)
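
The E-level lines above are not a failure of this test: a cert-reload loop (cert_rotation.go, per the log) is still watching the client cert of the addons-300979 profile, which has already been deleted. The gaps between attempts roughly double each time (6ms, 11ms, 21ms, 41ms, 81ms, ... 2.56s), the signature of exponential backoff. A generic sketch of that pattern, not client-go's actual rotation code:

	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// loadCert stands in for the client-cert read that keeps failing above.
	func loadCert(path string) error {
		_, err := os.ReadFile(path)
		return err
	}

	func main() {
		path := "/home/jenkins/minikube-integration/21657-3615/.minikube/profiles/addons-300979/client.crt"
		delay := 10 * time.Millisecond
		for attempt := 1; attempt <= 12; attempt++ {
			if err := loadCert(path); err == nil {
				fmt.Println("cert loaded")
				return
			} else if !errors.Is(err, os.ErrNotExist) {
				fmt.Println("giving up:", err)
				return
			}
			fmt.Printf("attempt %d failed; retrying in %v\n", attempt, delay)
			time.Sleep(delay)
			delay *= 2 // double the gap after every failure, as in the timestamps above
		}
	}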

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.12s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-992924 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.12s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.72s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.72s)

TestFunctional/serial/CacheCmd/cache/add_local (0.96s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-992924 /tmp/TestFunctionalserialCacheCmdcacheadd_local1582328865/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 cache add minikube-local-cache-test:functional-992924
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 cache delete minikube-local-cache-test:functional-992924
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-992924
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.96s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.68s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-992924 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (269.048907ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 cache reload
E0929 10:27:18.492933    7117 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/addons-300979/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.68s)
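
This subtest is effectively a demonstration of what cache reload does: delete the image inside the node with crictl, confirm inspecti now fails, run cache reload to push the host-side cache back into the node, and confirm inspecti succeeds again. The same cycle, driven from Go with os/exec (binary path and profile name taken from this run; error handling trimmed):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run executes a command and echoes its combined output, mirroring the
	// harness's (dbg) Run lines above.
	func run(name string, args ...string) error {
		out, err := exec.Command(name, args...).CombinedOutput()
		fmt.Printf("$ %s %v\n%s", name, args, out)
		return err
	}

	func main() {
		mk := "out/minikube-linux-amd64"
		p := "functional-992924"
		run(mk, "-p", p, "ssh", "sudo crictl rmi registry.k8s.io/pause:latest")
		// inspecti must fail now: the image is gone from the node
		if run(mk, "-p", p, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest") == nil {
			fmt.Println("expected inspecti to fail after rmi")
		}
		run(mk, "-p", p, "cache", "reload") // re-push every cached image into the node
		run(mk, "-p", p, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest")
	}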

TestFunctional/serial/CacheCmd/cache/delete (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 kubectl -- --context functional-992924 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-992924 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (41.7s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-992924 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0929 10:27:28.734335    7117 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/addons-300979/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:27:49.216254    7117 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/addons-300979/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-992924 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.698263736s)
functional_test.go:776: restart took 41.698406049s for "functional-992924" cluster.
I0929 10:28:01.372724    7117 config.go:182] Loaded profile config "functional-992924": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/ExtraConfig (41.70s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-992924 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.36s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-992924 logs: (1.358586204s)
--- PASS: TestFunctional/serial/LogsCmd (1.36s)

TestFunctional/serial/LogsFileCmd (1.37s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 logs --file /tmp/TestFunctionalserialLogsFileCmd1980739794/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-992924 logs --file /tmp/TestFunctionalserialLogsFileCmd1980739794/001/logs.txt: (1.372946873s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.37s)

TestFunctional/serial/InvalidService (4.03s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-992924 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-992924
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-992924: exit status 115 (324.375332ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32412 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-992924 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.03s)

TestFunctional/parallel/ConfigCmd (0.34s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-992924 config get cpus: exit status 14 (58.246242ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-992924 config get cpus: exit status 14 (50.331843ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.34s)
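
Note the exit code: config get on an unset key fails with status 14 in both probes above. A small check for that behaviour, treating 14 as an observation from this log rather than a documented contract:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-992924",
			"config", "get", "cpus")
		err := cmd.Run()
		var ee *exec.ExitError
		if errors.As(err, &ee) && ee.ExitCode() == 14 {
			fmt.Println("cpus is unset (exit status 14, as in the log above)")
		} else {
			fmt.Println("unexpected result:", err)
		}
	}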

TestFunctional/parallel/DashboardCmd (5.09s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-992924 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-992924 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 49061: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (5.09s)

TestFunctional/parallel/DryRun (0.34s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-992924 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-992924 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (146.273617ms)

-- stdout --
	* [functional-992924] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21657
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21657-3615/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21657-3615/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0929 10:28:30.475497   48675 out.go:360] Setting OutFile to fd 1 ...
	I0929 10:28:30.475591   48675 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:28:30.475603   48675 out.go:374] Setting ErrFile to fd 2...
	I0929 10:28:30.475609   48675 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:28:30.475823   48675 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21657-3615/.minikube/bin
	I0929 10:28:30.476314   48675 out.go:368] Setting JSON to false
	I0929 10:28:30.477304   48675 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":654,"bootTime":1759141056,"procs":233,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 10:28:30.477381   48675 start.go:140] virtualization: kvm guest
	I0929 10:28:30.479300   48675 out.go:179] * [functional-992924] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 10:28:30.480628   48675 notify.go:220] Checking for updates...
	I0929 10:28:30.480696   48675 out.go:179]   - MINIKUBE_LOCATION=21657
	I0929 10:28:30.482196   48675 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 10:28:30.483697   48675 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21657-3615/kubeconfig
	I0929 10:28:30.485035   48675 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21657-3615/.minikube
	I0929 10:28:30.489388   48675 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 10:28:30.490638   48675 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 10:28:30.492216   48675 config.go:182] Loaded profile config "functional-992924": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 10:28:30.492716   48675 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 10:28:30.515794   48675 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 10:28:30.515902   48675 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 10:28:30.568344   48675 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-09-29 10:28:30.559003772 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 10:28:30.568455   48675 docker.go:318] overlay module found
	I0929 10:28:30.570205   48675 out.go:179] * Using the docker driver based on existing profile
	I0929 10:28:30.571490   48675 start.go:304] selected driver: docker
	I0929 10:28:30.571502   48675 start.go:924] validating driver "docker" against &{Name:functional-992924 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-992924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 10:28:30.571594   48675 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 10:28:30.573372   48675 out.go:203] 
	W0929 10:28:30.574517   48675 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0929 10:28:30.575700   48675 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-992924 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.34s)
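
The dry run fails in validation before any resources are touched: after loading the existing profile, the requested 250MB is compared against the usable minimum (1800MB in this log) and the command exits 23 with RSRC_INSUFFICIENT_REQ_MEMORY. A toy version of that guard, with constants taken from the output above:

	package main

	import (
		"fmt"
		"os"
	)

	const minUsableMB = 1800 // the floor the log above reports

	// validateMemory rejects requests below the floor, as start.go does here.
	func validateMemory(requestedMB int) error {
		if requestedMB < minUsableMB {
			return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested memory allocation %dMiB is less than the usable minimum of %dMB",
				requestedMB, minUsableMB)
		}
		return nil
	}

	func main() {
		if err := validateMemory(250); err != nil {
			fmt.Fprintln(os.Stderr, "X Exiting due to", err)
			os.Exit(23) // the exit status recorded above
		}
	}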

TestFunctional/parallel/InternationalLanguage (0.16s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-992924 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-992924 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (154.782662ms)

-- stdout --
	* [functional-992924] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21657
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21657-3615/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21657-3615/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0929 10:28:17.723742   45031 out.go:360] Setting OutFile to fd 1 ...
	I0929 10:28:17.723841   45031 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:28:17.723849   45031 out.go:374] Setting ErrFile to fd 2...
	I0929 10:28:17.723853   45031 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:28:17.724150   45031 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21657-3615/.minikube/bin
	I0929 10:28:17.724552   45031 out.go:368] Setting JSON to false
	I0929 10:28:17.725440   45031 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":642,"bootTime":1759141056,"procs":221,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 10:28:17.725532   45031 start.go:140] virtualization: kvm guest
	I0929 10:28:17.728720   45031 out.go:179] * [functional-992924] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I0929 10:28:17.730372   45031 out.go:179]   - MINIKUBE_LOCATION=21657
	I0929 10:28:17.730430   45031 notify.go:220] Checking for updates...
	I0929 10:28:17.732602   45031 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 10:28:17.733830   45031 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21657-3615/kubeconfig
	I0929 10:28:17.734996   45031 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21657-3615/.minikube
	I0929 10:28:17.736104   45031 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 10:28:17.737183   45031 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 10:28:17.738846   45031 config.go:182] Loaded profile config "functional-992924": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 10:28:17.739452   45031 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 10:28:17.763533   45031 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 10:28:17.763611   45031 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 10:28:17.821792   45031 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-09-29 10:28:17.811406228 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 10:28:17.821976   45031 docker.go:318] overlay module found
	I0929 10:28:17.823720   45031 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I0929 10:28:17.825077   45031 start.go:304] selected driver: docker
	I0929 10:28:17.825093   45031 start.go:924] validating driver "docker" against &{Name:functional-992924 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-992924 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 10:28:17.825196   45031 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 10:28:17.826778   45031 out.go:203] 
	W0929 10:28:17.828248   45031 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0929 10:28:17.829452   45031 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)

TestFunctional/parallel/StatusCmd (0.88s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.88s)

TestFunctional/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (24.71s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [004e30f6-0cdb-44a4-922c-a6881163cb98] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003696522s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-992924 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-992924 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-992924 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-992924 apply -f testdata/storage-provisioner/pod.yaml
I0929 10:28:15.151177    7117 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [8885ec75-e606-4f59-a249-cc4d8e694f4c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [8885ec75-e606-4f59-a249-cc4d8e694f4c] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.00350426s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-992924 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-992924 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-992924 apply -f testdata/storage-provisioner/pod.yaml
I0929 10:28:27.382567    7117 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [810e6929-fb4b-4872-b9b4-f382e7cb12ce] Pending
helpers_test.go:352: "sp-pod" [810e6929-fb4b-4872-b9b4-f382e7cb12ce] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003811509s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-992924 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.71s)
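
The steps above are the actual persistence check: write a file through the PVC-backed pod, delete the pod, recreate it from the same manifest, and expect the file to survive because it lives on the claim rather than in the container filesystem. A compressed sketch of those steps (assumes kubectl, the context from this run, and this suite's testdata manifests; the real test also waits for each pod to reach Running):

	package main

	import (
		"os"
		"os/exec"
	)

	// kubectl runs a kubectl subcommand against the test cluster's context.
	func kubectl(args ...string) {
		cmd := exec.Command("kubectl",
			append([]string{"--context", "functional-992924"}, args...)...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		_ = cmd.Run()
	}

	func main() {
		kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
		kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
		kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
		// once the replacement pod is Running, the file written by the
		// old pod is still on the volume
		kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount")
	}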

TestFunctional/parallel/SSHCmd (0.58s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.58s)

TestFunctional/parallel/CpCmd (1.75s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 ssh -n functional-992924 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 cp functional-992924:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1837202559/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 ssh -n functional-992924 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 ssh -n functional-992924 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.75s)

TestFunctional/parallel/MySQL (17s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-992924 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-w26jj" [70fea463-ac9d-4573-a710-94025381200f] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
2025/09/29 10:28:35 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:352: "mysql-5bb876957f-w26jj" [70fea463-ac9d-4573-a710-94025381200f] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 16.003073932s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-992924 exec mysql-5bb876957f-w26jj -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-992924 exec mysql-5bb876957f-w26jj -- mysql -ppassword -e "show databases;": exit status 1 (103.302075ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0929 10:28:49.779535    7117 retry.go:31] will retry after 611.936472ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-992924 exec mysql-5bb876957f-w26jj -- mysql -ppassword -e "show databases;"
E0929 10:29:52.099669    7117 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/addons-300979/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:32:08.239196    7117 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/addons-300979/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:32:35.941061    7117 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/addons-300979/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:37:08.239441    7117 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/addons-300979/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/MySQL (17.00s)
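
The first mysql probe fails with ERROR 2002 because mysqld is not yet accepting connections even though the pod is Ready, so the harness retries after ~612ms and succeeds. A plain retry loop capturing the same pattern (the real helper is the retry.go shown above; the backoff values here are illustrative):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		delay := 500 * time.Millisecond
		for attempt := 1; attempt <= 5; attempt++ {
			out, err := exec.Command("kubectl", "--context", "functional-992924",
				"exec", "mysql-5bb876957f-w26jj", "--",
				"mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
			if err == nil {
				fmt.Printf("%s", out)
				return
			}
			// ERROR 2002: the server socket is not up yet; back off and retry
			fmt.Printf("attempt %d failed: %v; retrying in %v\n", attempt, err, delay)
			time.Sleep(delay)
			delay *= 2
		}
	}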

TestFunctional/parallel/FileSync (0.25s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/7117/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 ssh "sudo cat /etc/test/nested/copy/7117/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.25s)

TestFunctional/parallel/CertSync (1.51s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/7117.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 ssh "sudo cat /etc/ssl/certs/7117.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/7117.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 ssh "sudo cat /usr/share/ca-certificates/7117.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/71172.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 ssh "sudo cat /etc/ssl/certs/71172.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/71172.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 ssh "sudo cat /usr/share/ca-certificates/71172.pem"
E0929 10:28:30.178015    7117 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/addons-300979/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.51s)
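
The hashed filenames checked above (51391683.0, 3ec20f2e.0) follow OpenSSL's subject-hash convention for CA directories, so the test verifies that minikube installed both the synced certs and their hash links. The hash can be recomputed by hand (a sketch; assumes openssl is available inside the node):

    $ minikube -p functional-992924 ssh "sudo openssl x509 -noout -hash -in /etc/ssl/certs/7117.pem"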

TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-992924 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.59s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-992924 ssh "sudo systemctl is-active docker": exit status 1 (298.930399ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-992924 ssh "sudo systemctl is-active containerd": exit status 1 (289.521669ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.59s)
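
The non-zero exits above are the expected result: systemctl is-active prints the unit state and exits non-zero (status 3 here) when the unit is inactive, so on a crio-backed cluster the docker and containerd probes must fail this way. A quick manual check (sketch):

    $ minikube -p functional-992924 ssh "sudo systemctl is-active crio"     # "active", exit 0
    $ minikube -p functional-992924 ssh "sudo systemctl is-active docker"   # "inactive", exit 3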

TestFunctional/parallel/License (0.25s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.25s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.54s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.54s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-992924 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ localhost/kicbase/echo-server           │ functional-992924  │ 9056ab77afb8e │ 4.94MB │
│ localhost/minikube-local-cache-test     │ functional-992924  │ 935ab82d0a300 │ 3.33kB │
│ registry.k8s.io/kube-proxy              │ v1.34.0            │ df0860106674d │ 73.1MB │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/kube-apiserver          │ v1.34.0            │ 90550c43ad2bc │ 89.1MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.0            │ 46169d968e920 │ 53.8MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ docker.io/library/nginx                 │ alpine             │ 4a86014ec6994 │ 53.9MB │
│ docker.io/library/nginx                 │ latest             │ 41f689c209100 │ 197MB  │
│ localhost/my-image                      │ functional-992924  │ eaea234cbb374 │ 1.47MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.0            │ a0af72f2ec6d6 │ 76MB   │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-992924 image ls --format table --alsologtostderr:
I0929 10:28:45.248525   50956 out.go:360] Setting OutFile to fd 1 ...
I0929 10:28:45.248841   50956 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 10:28:45.248854   50956 out.go:374] Setting ErrFile to fd 2...
I0929 10:28:45.248859   50956 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 10:28:45.249117   50956 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21657-3615/.minikube/bin
I0929 10:28:45.249784   50956 config.go:182] Loaded profile config "functional-992924": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 10:28:45.249938   50956 config.go:182] Loaded profile config "functional-992924": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 10:28:45.250361   50956 cli_runner.go:164] Run: docker container inspect functional-992924 --format={{.State.Status}}
I0929 10:28:45.268529   50956 ssh_runner.go:195] Run: systemctl --version
I0929 10:28:45.268582   50956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-992924
I0929 10:28:45.285502   50956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21657-3615/.minikube/machines/functional-992924/id_rsa Username:docker}
I0929 10:28:45.377492   50956 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)
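
As the stderr trace shows, image ls is backed by crictl inside the node, so the raw image data can be inspected directly when a listing looks wrong (the command is the same one minikube runs above):

    $ minikube -p functional-992924 ssh "sudo crictl images --output json"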

TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-992924 image ls --format json --alsologtostderr:
[{"id":"eaea234cbb374aeec81a2f7aa89aa19e460dfe0a0b7fd0b05e37de229588d789","repoDigests":["localhost/my-image@sha256:33da4f54f6d10b9f72aec7d95f5af4229fb8b727d469e24e02b2b71b0e2d2786"],"repoTags":["localhost/my-image:functional-992924"],"size":"1468194"},{"id":"90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90","repoDigests":["registry.k8s.io/kube-apiserver@sha256:495d3253a47a9a64a62041d518678c8b101fb628488e729d9f52ddff7cf82a86","registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.0"],"size":"89050097"},{"id":"a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:82ea603ed3cce63f9f870f22299741e0011318391cf722dd924a1d5a9f8ce6f6","registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.0"],"size":"76004183"},{"id":"46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc","repoDigests":["registry.k8s.io/kube-scheduler@sha256:31b77e40d737b6d3e3b19b4afd681c9362aef06353075895452fc9a41fe34140","registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.0"],"size":"53844823"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"e18f266bb51a1cb33c1cec96888704e470697ae6c70c65f52cf1d433267da3d8","repoDigests":["docker.io/library/11a7e437b7a6944ba238ba45bdfc0462e4249eb87caf9a81a1fabd8718f4a9ca-tmp@sha256:f4b941bdacd64f48aea744dce5ba5762c1f65a2c548f65a08f07b7e444ec7399"],"repoTags":[],"size":"1465612"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"935ab82d0a300d82caa94dcb55a94fd74c252dba3e6ff56016785cbbcf6c6519","repoDigests":["localhost/minikube-local-cache-test@sha256:a7124f2db8eb8bd1a3c657c217c71316b110543321f957b481fcccca063e2ec3"],"repoTags":["localhost/minikube-local-cache-test:functional-992924"],"size":"3330"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce","repoDigests":["registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067","registry.k8s.io/kube-proxy@sha256:5f71731a5eadcf74f3997dfc159bf5ca36e48c3387c19082fc21871e0dbb19af"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.0"],"size":"73138071"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9","repoDigests":["docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8","docker.io/library/nginx@sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a"],"repoTags":["docker.io/library/nginx:alpine"],"size":"53949946"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-992924"],"size":"4943877"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"41f689c209100e6cadf3ce7fdd02035e90dbd1d586716bf8fc6ea55c365b2d81","repoDigests":["docker.io/library/nginx@sha256:27637a97e3d1d0518adc2a877b60db3779970f19474b6e586ddcbc2d5500e285","docker.io/library/nginx@sha256:d5f28ef21aabddd098f3dbc21fe5b7a7d7a184720bc07da0b6c9b9820e97f25e"],"repoTags":["docker.io/library/nginx:latest"],"size":"196550530"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-992924 image ls --format json --alsologtostderr:
I0929 10:28:45.029076   50899 out.go:360] Setting OutFile to fd 1 ...
I0929 10:28:45.029183   50899 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 10:28:45.029194   50899 out.go:374] Setting ErrFile to fd 2...
I0929 10:28:45.029201   50899 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 10:28:45.029383   50899 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21657-3615/.minikube/bin
I0929 10:28:45.029972   50899 config.go:182] Loaded profile config "functional-992924": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 10:28:45.030091   50899 config.go:182] Loaded profile config "functional-992924": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 10:28:45.030487   50899 cli_runner.go:164] Run: docker container inspect functional-992924 --format={{.State.Status}}
I0929 10:28:45.048269   50899 ssh_runner.go:195] Run: systemctl --version
I0929 10:28:45.048310   50899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-992924
I0929 10:28:45.065347   50899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21657-3615/.minikube/machines/functional-992924/id_rsa Username:docker}
I0929 10:28:45.157527   50899 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)
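
The JSON output is the easiest of the three formats to post-process; for example, listing only the tagged images (a sketch, assuming jq on the host):

    $ minikube -p functional-992924 image ls --format json | jq -r '.[].repoTags[]?'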

TestFunctional/parallel/ImageCommands/ImageListYaml (1.85s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 image ls --format yaml --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p functional-992924 image ls --format yaml --alsologtostderr: (1.854217882s)
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-992924 image ls --format yaml --alsologtostderr:
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9
repoDigests:
- docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8
- docker.io/library/nginx@sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a
repoTags:
- docker.io/library/nginx:alpine
size: "53949946"
- id: 41f689c209100e6cadf3ce7fdd02035e90dbd1d586716bf8fc6ea55c365b2d81
repoDigests:
- docker.io/library/nginx@sha256:27637a97e3d1d0518adc2a877b60db3779970f19474b6e586ddcbc2d5500e285
- docker.io/library/nginx@sha256:d5f28ef21aabddd098f3dbc21fe5b7a7d7a184720bc07da0b6c9b9820e97f25e
repoTags:
- docker.io/library/nginx:latest
size: "196550530"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:82ea603ed3cce63f9f870f22299741e0011318391cf722dd924a1d5a9f8ce6f6
- registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.0
size: "76004183"
- id: 46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:31b77e40d737b6d3e3b19b4afd681c9362aef06353075895452fc9a41fe34140
- registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.0
size: "53844823"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-992924
size: "4943877"
- id: 935ab82d0a300d82caa94dcb55a94fd74c252dba3e6ff56016785cbbcf6c6519
repoDigests:
- localhost/minikube-local-cache-test@sha256:a7124f2db8eb8bd1a3c657c217c71316b110543321f957b481fcccca063e2ec3
repoTags:
- localhost/minikube-local-cache-test:functional-992924
size: "3330"
- id: 90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:495d3253a47a9a64a62041d518678c8b101fb628488e729d9f52ddff7cf82a86
- registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.0
size: "89050097"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce
repoDigests:
- registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067
- registry.k8s.io/kube-proxy@sha256:5f71731a5eadcf74f3997dfc159bf5ca36e48c3387c19082fc21871e0dbb19af
repoTags:
- registry.k8s.io/kube-proxy:v1.34.0
size: "73138071"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-992924 image ls --format yaml --alsologtostderr:
I0929 10:28:40.508845   50250 out.go:360] Setting OutFile to fd 1 ...
I0929 10:28:40.509191   50250 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 10:28:40.509202   50250 out.go:374] Setting ErrFile to fd 2...
I0929 10:28:40.509208   50250 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 10:28:40.509490   50250 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21657-3615/.minikube/bin
I0929 10:28:40.510335   50250 config.go:182] Loaded profile config "functional-992924": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 10:28:40.510466   50250 config.go:182] Loaded profile config "functional-992924": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 10:28:40.511007   50250 cli_runner.go:164] Run: docker container inspect functional-992924 --format={{.State.Status}}
I0929 10:28:40.532161   50250 ssh_runner.go:195] Run: systemctl --version
I0929 10:28:40.532209   50250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-992924
I0929 10:28:40.551044   50250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21657-3615/.minikube/machines/functional-992924/id_rsa Username:docker}
I0929 10:28:40.644849   50250 ssh_runner.go:195] Run: sudo crictl images --output json
I0929 10:28:42.304533   50250 ssh_runner.go:235] Completed: sudo crictl images --output json: (1.659653647s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (1.85s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.68s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-992924 ssh pgrep buildkitd: exit status 1 (259.719805ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 image build -t localhost/my-image:functional-992924 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-992924 image build -t localhost/my-image:functional-992924 testdata/build --alsologtostderr: (2.203281493s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-992924 image build -t localhost/my-image:functional-992924 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> e18f266bb51
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-992924
--> eaea234cbb3
Successfully tagged localhost/my-image:functional-992924
eaea234cbb374aeec81a2f7aa89aa19e460dfe0a0b7fd0b05e37de229588d789
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-992924 image build -t localhost/my-image:functional-992924 testdata/build --alsologtostderr:
I0929 10:28:42.616282   50492 out.go:360] Setting OutFile to fd 1 ...
I0929 10:28:42.616404   50492 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 10:28:42.616413   50492 out.go:374] Setting ErrFile to fd 2...
I0929 10:28:42.616416   50492 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 10:28:42.616598   50492 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21657-3615/.minikube/bin
I0929 10:28:42.617212   50492 config.go:182] Loaded profile config "functional-992924": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 10:28:42.617810   50492 config.go:182] Loaded profile config "functional-992924": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0929 10:28:42.618228   50492 cli_runner.go:164] Run: docker container inspect functional-992924 --format={{.State.Status}}
I0929 10:28:42.635825   50492 ssh_runner.go:195] Run: systemctl --version
I0929 10:28:42.635866   50492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-992924
I0929 10:28:42.652469   50492 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21657-3615/.minikube/machines/functional-992924/id_rsa Username:docker}
I0929 10:28:42.744337   50492 build_images.go:161] Building image from path: /tmp/build.2534975860.tar
I0929 10:28:42.744406   50492 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0929 10:28:42.753536   50492 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2534975860.tar
I0929 10:28:42.757198   50492 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2534975860.tar: stat -c "%s %y" /var/lib/minikube/build/build.2534975860.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2534975860.tar': No such file or directory
I0929 10:28:42.757234   50492 ssh_runner.go:362] scp /tmp/build.2534975860.tar --> /var/lib/minikube/build/build.2534975860.tar (3072 bytes)
I0929 10:28:42.783103   50492 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2534975860
I0929 10:28:42.792043   50492 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2534975860 -xf /var/lib/minikube/build/build.2534975860.tar
I0929 10:28:42.801259   50492 crio.go:315] Building image: /var/lib/minikube/build/build.2534975860
I0929 10:28:42.801324   50492 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-992924 /var/lib/minikube/build/build.2534975860 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0929 10:28:44.750515   50492 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-992924 /var/lib/minikube/build/build.2534975860 --cgroup-manager=cgroupfs: (1.94916491s)
I0929 10:28:44.750590   50492 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2534975860
I0929 10:28:44.760052   50492 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2534975860.tar
I0929 10:28:44.768572   50492 build_images.go:217] Built localhost/my-image:functional-992924 from /tmp/build.2534975860.tar
I0929 10:28:44.768605   50492 build_images.go:133] succeeded building to: functional-992924
I0929 10:28:44.768610   50492 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.68s)
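
The STEP output above implies a three-line build context. A reconstruction for trying the same flow by hand (a sketch; the actual testdata/build contents may differ, and build/content.txt here is a stand-in):

    $ mkdir -p build && echo test > build/content.txt
    $ printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > build/Dockerfile
    $ minikube -p functional-992924 image build -t localhost/my-image:functional-992924 ./build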

TestFunctional/parallel/ImageCommands/Setup (0.44s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-992924
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.44s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.51s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-992924 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-992924 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-992924 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-992924 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 43302: os: process already finished
helpers_test.go:525: unable to kill pid 43026: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.51s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 image load --daemon kicbase/echo-server:functional-992924 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-992924 image load --daemon kicbase/echo-server:functional-992924 --alsologtostderr: (1.078144735s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.31s)

TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "351.39537ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "58.043029ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-992924 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.22s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-992924 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [71eb471f-8528-4f38-8fdd-824cde753e15] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [71eb471f-8528-4f38-8fdd-824cde753e15] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.003728851s
I0929 10:28:17.514444    7117 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.22s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "341.226ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "54.8458ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.40s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 image load --daemon kicbase/echo-server:functional-992924 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-amd64 -p functional-992924 image load --daemon kicbase/echo-server:functional-992924 --alsologtostderr: (1.091322543s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.31s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-992924
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 image load --daemon kicbase/echo-server:functional-992924 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.00s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 image save kicbase/echo-server:functional-992924 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.49s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 image rm kicbase/echo-server:functional-992924 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.69s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.69s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-992924
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 image save --daemon kicbase/echo-server:functional-992924 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-992924
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.53s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-992924 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.96.238.124 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
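
For reference, the flow these tunnel tests exercise can be reproduced by hand: start the tunnel, read the LoadBalancer ingress IP, then curl it directly (a sketch; the 10.96.238.124 address is specific to this run):

    $ minikube -p functional-992924 tunnel &
    $ kubectl --context functional-992924 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
    $ curl http://10.96.238.124/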

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-992924 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/MountCmd/any-port (6.61s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-992924 /tmp/TestFunctionalparallelMountCmdany-port1493656164/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1759141697834547369" to /tmp/TestFunctionalparallelMountCmdany-port1493656164/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1759141697834547369" to /tmp/TestFunctionalparallelMountCmdany-port1493656164/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1759141697834547369" to /tmp/TestFunctionalparallelMountCmdany-port1493656164/001/test-1759141697834547369
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-992924 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (285.213539ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0929 10:28:18.120100    7117 retry.go:31] will retry after 520.856172ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 29 10:28 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 29 10:28 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 29 10:28 test-1759141697834547369
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 ssh cat /mount-9p/test-1759141697834547369
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-992924 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [fc4b1cb4-d2f4-4bf3-a59d-122a4372ba2c] Pending
helpers_test.go:352: "busybox-mount" [fc4b1cb4-d2f4-4bf3-a59d-122a4372ba2c] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [fc4b1cb4-d2f4-4bf3-a59d-122a4372ba2c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [fc4b1cb4-d2f4-4bf3-a59d-122a4372ba2c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.00272103s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-992924 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-992924 /tmp/TestFunctionalparallelMountCmdany-port1493656164/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.61s)
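
The 9p mount checks above reduce to a mount, a findmnt probe, and file round-trips between host and node; a manual equivalent (sketch, using a throwaway host directory):

    $ minikube mount -p functional-992924 /tmp/hostdir:/mount-9p &
    $ minikube -p functional-992924 ssh "findmnt -T /mount-9p | grep 9p"
    $ minikube -p functional-992924 ssh "ls -la /mount-9p"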

TestFunctional/parallel/MountCmd/specific-port (1.51s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-992924 /tmp/TestFunctionalparallelMountCmdspecific-port3664043712/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-992924 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (257.448416ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0929 10:28:24.698018    7117 retry.go:31] will retry after 300.986311ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-992924 /tmp/TestFunctionalparallelMountCmdspecific-port3664043712/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-992924 ssh "sudo umount -f /mount-9p": exit status 1 (247.472386ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-992924 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-992924 /tmp/TestFunctionalparallelMountCmdspecific-port3664043712/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.51s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.84s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-992924 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3411483591/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-992924 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3411483591/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-992924 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3411483591/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-992924 ssh "findmnt -T" /mount1: exit status 1 (298.021185ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0929 10:28:26.252691    7117 retry.go:31] will retry after 726.405249ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-992924 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-992924 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3411483591/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-992924 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3411483591/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-992924 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3411483591/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.84s)
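
The first findmnt probe fails while the three mount daemons are still settling, so the harness waits and retries (retry.go:31 above). A sketch of that retry-until-ready loop, assuming a simple doubling backoff rather than minikube's internal retry policy:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForMount polls `findmnt -T <path>` inside the VM until it
// succeeds or the attempts run out. The doubling delay here stands in
// for the harness's own computed wait.
func waitForMount(profile, path string, attempts int) error {
	delay := 500 * time.Millisecond
	var err error
	for i := 0; i < attempts; i++ {
		err = exec.Command("minikube", "-p", profile, "ssh",
			"findmnt -T "+path).Run()
		if err == nil {
			return nil
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2
	}
	return fmt.Errorf("%s never became a mount point: %v", path, err)
}

func main() {
	for _, m := range []string{"/mount1", "/mount2", "/mount3"} {
		if err := waitForMount("functional-992924", m, 5); err != nil {
			fmt.Println(err)
		}
	}
}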

TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)

TestFunctional/parallel/ServiceCmd/List (1.68s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-992924 service list: (1.680874362s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.68s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.67s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-992924 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-992924 service list -o json: (1.674790234s)
functional_test.go:1504: Took "1.674879557s" to run "out/minikube-linux-amd64 -p functional-992924 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.67s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-992924
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-992924
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-992924
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (144.31s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-925879 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (2m23.636875617s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (144.31s)

TestMultiControlPlane/serial/DeployApp (5.87s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-925879 kubectl -- rollout status deployment/busybox: (3.916518717s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 kubectl -- exec busybox-7b57f96db7-6jw59 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 kubectl -- exec busybox-7b57f96db7-hcb8g -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 kubectl -- exec busybox-7b57f96db7-ls6sb -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 kubectl -- exec busybox-7b57f96db7-6jw59 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 kubectl -- exec busybox-7b57f96db7-hcb8g -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 kubectl -- exec busybox-7b57f96db7-ls6sb -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 kubectl -- exec busybox-7b57f96db7-6jw59 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 kubectl -- exec busybox-7b57f96db7-hcb8g -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 kubectl -- exec busybox-7b57f96db7-ls6sb -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.87s)

TestMultiControlPlane/serial/PingHostFromPods (1.04s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 kubectl -- exec busybox-7b57f96db7-6jw59 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 kubectl -- exec busybox-7b57f96db7-6jw59 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 kubectl -- exec busybox-7b57f96db7-hcb8g -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 kubectl -- exec busybox-7b57f96db7-hcb8g -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 kubectl -- exec busybox-7b57f96db7-ls6sb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 kubectl -- exec busybox-7b57f96db7-ls6sb -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.04s)
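
The shell pipeline above recovers the host gateway address from DNS before pinging it: busybox nslookup prints the answer for host.minikube.internal on its fifth line, and `awk 'NR==5' | cut -d' ' -f3` keeps that line's third space-separated field, the IP (192.168.49.1 on the docker driver). The same extraction in Go, run against sample text shaped like older busybox nslookup output (the sample is illustrative only):

package main

import (
	"fmt"
	"strings"
)

// hostIP mimics `nslookup ... | awk 'NR==5' | cut -d' ' -f3`: take
// line 5 of the output and return its third space-separated field.
func hostIP(nslookupOut string) string {
	lines := strings.Split(nslookupOut, "\n")
	if len(lines) < 5 {
		return ""
	}
	fields := strings.Split(lines[4], " ")
	if len(fields) < 3 {
		return ""
	}
	return fields[2]
}

func main() {
	sample := strings.Join([]string{
		"Server:    10.96.0.10",
		"Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local",
		"",
		"Name:      host.minikube.internal",
		"Address 1: 192.168.49.1 host.minikube.internal",
	}, "\n")
	fmt.Println(hostIP(sample)) // 192.168.49.1
}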

TestMultiControlPlane/serial/AddWorkerNode (23.84s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-925879 node add --alsologtostderr -v 5: (23.00068827s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (23.84s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-925879 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)
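
The jsonpath expression above dumps every node's label map in one string so the test can check that the expected minikube.k8s.io labels landed on each node. A structured equivalent, sketched against `kubectl get nodes -o json` with the decoded type trimmed to the one field the check reads:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// nodeList is the minimal slice of `kubectl get nodes -o json` needed
// here: just each node's metadata.labels.
type nodeList struct {
	Items []struct {
		Metadata struct {
			Labels map[string]string `json:"labels"`
		} `json:"metadata"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "ha-925879",
		"get", "nodes", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	var nodes nodeList
	if err := json.Unmarshal(out, &nodes); err != nil {
		panic(err)
	}
	// Structured equivalent of the jsonpath one-liner in the log.
	for _, n := range nodes.Items {
		fmt.Println(n.Metadata.Labels)
	}
}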

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.85s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.85s)

TestMultiControlPlane/serial/CopyFile (16.04s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 cp testdata/cp-test.txt ha-925879:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 ssh -n ha-925879 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 cp ha-925879:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile387654603/001/cp-test_ha-925879.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 ssh -n ha-925879 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 cp ha-925879:/home/docker/cp-test.txt ha-925879-m02:/home/docker/cp-test_ha-925879_ha-925879-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 ssh -n ha-925879 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 ssh -n ha-925879-m02 "sudo cat /home/docker/cp-test_ha-925879_ha-925879-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 cp ha-925879:/home/docker/cp-test.txt ha-925879-m03:/home/docker/cp-test_ha-925879_ha-925879-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 ssh -n ha-925879 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 ssh -n ha-925879-m03 "sudo cat /home/docker/cp-test_ha-925879_ha-925879-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 cp ha-925879:/home/docker/cp-test.txt ha-925879-m04:/home/docker/cp-test_ha-925879_ha-925879-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 ssh -n ha-925879 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 ssh -n ha-925879-m04 "sudo cat /home/docker/cp-test_ha-925879_ha-925879-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 cp testdata/cp-test.txt ha-925879-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 ssh -n ha-925879-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 cp ha-925879-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile387654603/001/cp-test_ha-925879-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 ssh -n ha-925879-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 cp ha-925879-m02:/home/docker/cp-test.txt ha-925879:/home/docker/cp-test_ha-925879-m02_ha-925879.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 ssh -n ha-925879-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 ssh -n ha-925879 "sudo cat /home/docker/cp-test_ha-925879-m02_ha-925879.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 cp ha-925879-m02:/home/docker/cp-test.txt ha-925879-m03:/home/docker/cp-test_ha-925879-m02_ha-925879-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 ssh -n ha-925879-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 ssh -n ha-925879-m03 "sudo cat /home/docker/cp-test_ha-925879-m02_ha-925879-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 cp ha-925879-m02:/home/docker/cp-test.txt ha-925879-m04:/home/docker/cp-test_ha-925879-m02_ha-925879-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 ssh -n ha-925879-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 ssh -n ha-925879-m04 "sudo cat /home/docker/cp-test_ha-925879-m02_ha-925879-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 cp testdata/cp-test.txt ha-925879-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 ssh -n ha-925879-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 cp ha-925879-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile387654603/001/cp-test_ha-925879-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 ssh -n ha-925879-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 cp ha-925879-m03:/home/docker/cp-test.txt ha-925879:/home/docker/cp-test_ha-925879-m03_ha-925879.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 ssh -n ha-925879-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 ssh -n ha-925879 "sudo cat /home/docker/cp-test_ha-925879-m03_ha-925879.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 cp ha-925879-m03:/home/docker/cp-test.txt ha-925879-m02:/home/docker/cp-test_ha-925879-m03_ha-925879-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 ssh -n ha-925879-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 ssh -n ha-925879-m02 "sudo cat /home/docker/cp-test_ha-925879-m03_ha-925879-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 cp ha-925879-m03:/home/docker/cp-test.txt ha-925879-m04:/home/docker/cp-test_ha-925879-m03_ha-925879-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 ssh -n ha-925879-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 ssh -n ha-925879-m04 "sudo cat /home/docker/cp-test_ha-925879-m03_ha-925879-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 cp testdata/cp-test.txt ha-925879-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 ssh -n ha-925879-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 cp ha-925879-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile387654603/001/cp-test_ha-925879-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 ssh -n ha-925879-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 cp ha-925879-m04:/home/docker/cp-test.txt ha-925879:/home/docker/cp-test_ha-925879-m04_ha-925879.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 ssh -n ha-925879-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 ssh -n ha-925879 "sudo cat /home/docker/cp-test_ha-925879-m04_ha-925879.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 cp ha-925879-m04:/home/docker/cp-test.txt ha-925879-m02:/home/docker/cp-test_ha-925879-m04_ha-925879-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 ssh -n ha-925879-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 ssh -n ha-925879-m02 "sudo cat /home/docker/cp-test_ha-925879-m04_ha-925879-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 cp ha-925879-m04:/home/docker/cp-test.txt ha-925879-m03:/home/docker/cp-test_ha-925879-m04_ha-925879-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 ssh -n ha-925879-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 ssh -n ha-925879-m03 "sudo cat /home/docker/cp-test_ha-925879-m04_ha-925879-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (16.04s)
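
Every cp above is paired with an ssh cat, so each cell of the node-to-node matrix is a full round trip: push the file, read it back where it landed, compare. One leg of that matrix condensed into a sketch (the helper name is hypothetical):

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// copyAndVerify pushes src to node:dst with `minikube cp`, reads it
// back over ssh, and compares the contents byte for byte.
func copyAndVerify(profile, node, src, dst string) error {
	if err := exec.Command("minikube", "-p", profile, "cp",
		src, node+":"+dst).Run(); err != nil {
		return fmt.Errorf("cp: %v", err)
	}
	got, err := exec.Command("minikube", "-p", profile, "ssh",
		"-n", node, "sudo cat "+dst).Output()
	if err != nil {
		return fmt.Errorf("read back: %v", err)
	}
	want, err := os.ReadFile(src)
	if err != nil {
		return err
	}
	if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
		return fmt.Errorf("%s:%s differs from %s", node, dst, src)
	}
	return nil
}

func main() {
	fmt.Println(copyAndVerify("ha-925879", "ha-925879-m02",
		"testdata/cp-test.txt", "/home/docker/cp-test.txt"))
}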

TestMultiControlPlane/serial/StopSecondaryNode (14s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-925879 node stop m02 --alsologtostderr -v 5: (13.343129754s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-925879 status --alsologtostderr -v 5: exit status 7 (659.281066ms)

-- stdout --
	ha-925879
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-925879-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-925879-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-925879-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0929 10:41:51.084346   76335 out.go:360] Setting OutFile to fd 1 ...
	I0929 10:41:51.084629   76335 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:41:51.084639   76335 out.go:374] Setting ErrFile to fd 2...
	I0929 10:41:51.084643   76335 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:41:51.084833   76335 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21657-3615/.minikube/bin
	I0929 10:41:51.085040   76335 out.go:368] Setting JSON to false
	I0929 10:41:51.085070   76335 mustload.go:65] Loading cluster: ha-925879
	I0929 10:41:51.085132   76335 notify.go:220] Checking for updates...
	I0929 10:41:51.085418   76335 config.go:182] Loaded profile config "ha-925879": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 10:41:51.085435   76335 status.go:174] checking status of ha-925879 ...
	I0929 10:41:51.085908   76335 cli_runner.go:164] Run: docker container inspect ha-925879 --format={{.State.Status}}
	I0929 10:41:51.105113   76335 status.go:371] ha-925879 host status = "Running" (err=<nil>)
	I0929 10:41:51.105137   76335 host.go:66] Checking if "ha-925879" exists ...
	I0929 10:41:51.105449   76335 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-925879
	I0929 10:41:51.122950   76335 host.go:66] Checking if "ha-925879" exists ...
	I0929 10:41:51.123183   76335 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 10:41:51.123214   76335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-925879
	I0929 10:41:51.140534   76335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21657-3615/.minikube/machines/ha-925879/id_rsa Username:docker}
	I0929 10:41:51.233256   76335 ssh_runner.go:195] Run: systemctl --version
	I0929 10:41:51.237559   76335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 10:41:51.248891   76335 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 10:41:51.304574   76335 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-29 10:41:51.295143898 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 10:41:51.305159   76335 kubeconfig.go:125] found "ha-925879" server: "https://192.168.49.254:8443"
	I0929 10:41:51.305193   76335 api_server.go:166] Checking apiserver status ...
	I0929 10:41:51.305233   76335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 10:41:51.316862   76335 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1431/cgroup
	W0929 10:41:51.326469   76335 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1431/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0929 10:41:51.326520   76335 ssh_runner.go:195] Run: ls
	I0929 10:41:51.329861   76335 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0929 10:41:51.333837   76335 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0929 10:41:51.333860   76335 status.go:463] ha-925879 apiserver status = Running (err=<nil>)
	I0929 10:41:51.333870   76335 status.go:176] ha-925879 status: &{Name:ha-925879 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 10:41:51.333910   76335 status.go:174] checking status of ha-925879-m02 ...
	I0929 10:41:51.334164   76335 cli_runner.go:164] Run: docker container inspect ha-925879-m02 --format={{.State.Status}}
	I0929 10:41:51.352356   76335 status.go:371] ha-925879-m02 host status = "Stopped" (err=<nil>)
	I0929 10:41:51.352376   76335 status.go:384] host is not running, skipping remaining checks
	I0929 10:41:51.352381   76335 status.go:176] ha-925879-m02 status: &{Name:ha-925879-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 10:41:51.352407   76335 status.go:174] checking status of ha-925879-m03 ...
	I0929 10:41:51.352720   76335 cli_runner.go:164] Run: docker container inspect ha-925879-m03 --format={{.State.Status}}
	I0929 10:41:51.369824   76335 status.go:371] ha-925879-m03 host status = "Running" (err=<nil>)
	I0929 10:41:51.369846   76335 host.go:66] Checking if "ha-925879-m03" exists ...
	I0929 10:41:51.370205   76335 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-925879-m03
	I0929 10:41:51.386674   76335 host.go:66] Checking if "ha-925879-m03" exists ...
	I0929 10:41:51.386938   76335 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 10:41:51.386975   76335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-925879-m03
	I0929 10:41:51.404316   76335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21657-3615/.minikube/machines/ha-925879-m03/id_rsa Username:docker}
	I0929 10:41:51.496976   76335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 10:41:51.508633   76335 kubeconfig.go:125] found "ha-925879" server: "https://192.168.49.254:8443"
	I0929 10:41:51.508663   76335 api_server.go:166] Checking apiserver status ...
	I0929 10:41:51.508709   76335 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 10:41:51.519319   76335 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1358/cgroup
	W0929 10:41:51.530327   76335 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1358/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0929 10:41:51.530368   76335 ssh_runner.go:195] Run: ls
	I0929 10:41:51.534116   76335 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0929 10:41:51.538206   76335 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0929 10:41:51.538237   76335 status.go:463] ha-925879-m03 apiserver status = Running (err=<nil>)
	I0929 10:41:51.538248   76335 status.go:176] ha-925879-m03 status: &{Name:ha-925879-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 10:41:51.538267   76335 status.go:174] checking status of ha-925879-m04 ...
	I0929 10:41:51.538494   76335 cli_runner.go:164] Run: docker container inspect ha-925879-m04 --format={{.State.Status}}
	I0929 10:41:51.557397   76335 status.go:371] ha-925879-m04 host status = "Running" (err=<nil>)
	I0929 10:41:51.557421   76335 host.go:66] Checking if "ha-925879-m04" exists ...
	I0929 10:41:51.557653   76335 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-925879-m04
	I0929 10:41:51.575450   76335 host.go:66] Checking if "ha-925879-m04" exists ...
	I0929 10:41:51.575678   76335 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 10:41:51.575709   76335 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-925879-m04
	I0929 10:41:51.592831   76335 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21657-3615/.minikube/machines/ha-925879-m04/id_rsa Username:docker}
	I0929 10:41:51.686288   76335 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 10:41:51.698598   76335 status.go:176] ha-925879-m04 status: &{Name:ha-925879-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (14.00s)
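
The non-zero status exit above is by design: minikube's status command encodes component state as a bitmask in its exit code, and 7 (1|2|4) matches host, kubelet, and apiserver all reported down on at least one node, which is exactly what stopping m02 produces. A sketch of decoding that convention (bit names paraphrased; the bitmask scheme is an assumption the observed value is consistent with):

package main

import "fmt"

// Assumed status bits: the exit code is a bitwise OR across nodes, so
// 7 means host, kubelet, and apiserver are all down somewhere.
const (
	hostDown      = 1 << 0
	kubeletDown   = 1 << 1
	apiserverDown = 1 << 2
)

func decode(code int) {
	fmt.Printf("exit %d:", code)
	if code&hostDown != 0 {
		fmt.Print(" host-down")
	}
	if code&kubeletDown != 0 {
		fmt.Print(" kubelet-down")
	}
	if code&apiserverDown != 0 {
		fmt.Print(" apiserver-down")
	}
	fmt.Println()
}

func main() {
	decode(7) // the code observed above after stopping m02
}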

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.67s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.67s)

TestMultiControlPlane/serial/RestartSecondaryNode (9.51s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-925879 node start m02 --alsologtostderr -v 5: (8.584990979s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (9.51s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.89s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.89s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (106.01s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 stop --alsologtostderr -v 5
E0929 10:42:08.241422    7117 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/addons-300979/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-925879 stop --alsologtostderr -v 5: (37.672717333s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 start --wait true --alsologtostderr -v 5
E0929 10:43:08.797058    7117 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/functional-992924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:43:08.803470    7117 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/functional-992924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:43:08.814768    7117 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/functional-992924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:43:08.836121    7117 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/functional-992924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:43:08.877516    7117 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/functional-992924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:43:08.958965    7117 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/functional-992924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:43:09.120519    7117 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/functional-992924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:43:09.442222    7117 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/functional-992924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:43:10.084230    7117 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/functional-992924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:43:11.366092    7117 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/functional-992924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:43:13.927897    7117 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/functional-992924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:43:19.049618    7117 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/functional-992924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:43:29.291546    7117 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/functional-992924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 10:43:31.303117    7117 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/addons-300979/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-925879 start --wait true --alsologtostderr -v 5: (1m8.236369543s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (106.01s)

TestMultiControlPlane/serial/DeleteSecondaryNode (11.31s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 node delete m03 --alsologtostderr -v 5
E0929 10:43:49.773025    7117 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/functional-992924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-925879 node delete m03 --alsologtostderr -v 5: (10.490050152s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.31s)
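
The quoted go-template is how the test reads node health without jsonpath: it walks every node's status.conditions and prints the status of each Ready condition, one per line, so the assertion is simply that every line says True. The same template run locally with Go's text/template against stubbed data (the stub stands in for kubectl's real node list):

package main

import (
	"os"
	"text/template"
)

const readyTmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

func main() {
	conds := func(s string) []map[string]string {
		return []map[string]string{{"type": "Ready", "status": s}}
	}
	// Two healthy nodes stubbed in the `get nodes -o json` shape.
	data := map[string]any{
		"items": []map[string]any{
			{"status": map[string]any{"conditions": conds("True")}},
			{"status": map[string]any{"conditions": conds("True")}},
		},
	}
	t := template.Must(template.New("ready").Parse(readyTmpl))
	if err := t.Execute(os.Stdout, data); err != nil { // prints " True" twice
		panic(err)
	}
}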

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.68s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.68s)

TestMultiControlPlane/serial/StopCluster (42.94s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 stop --alsologtostderr -v 5
E0929 10:44:30.734780    7117 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/functional-992924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-925879 stop --alsologtostderr -v 5: (42.837649278s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-925879 status --alsologtostderr -v 5: exit status 7 (99.139359ms)

-- stdout --
	ha-925879
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-925879-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-925879-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0929 10:44:43.659447   92747 out.go:360] Setting OutFile to fd 1 ...
	I0929 10:44:43.659546   92747 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:44:43.659554   92747 out.go:374] Setting ErrFile to fd 2...
	I0929 10:44:43.659558   92747 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:44:43.659728   92747 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21657-3615/.minikube/bin
	I0929 10:44:43.659892   92747 out.go:368] Setting JSON to false
	I0929 10:44:43.659919   92747 mustload.go:65] Loading cluster: ha-925879
	I0929 10:44:43.659982   92747 notify.go:220] Checking for updates...
	I0929 10:44:43.660275   92747 config.go:182] Loaded profile config "ha-925879": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 10:44:43.660291   92747 status.go:174] checking status of ha-925879 ...
	I0929 10:44:43.660689   92747 cli_runner.go:164] Run: docker container inspect ha-925879 --format={{.State.Status}}
	I0929 10:44:43.679492   92747 status.go:371] ha-925879 host status = "Stopped" (err=<nil>)
	I0929 10:44:43.679521   92747 status.go:384] host is not running, skipping remaining checks
	I0929 10:44:43.679527   92747 status.go:176] ha-925879 status: &{Name:ha-925879 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 10:44:43.679552   92747 status.go:174] checking status of ha-925879-m02 ...
	I0929 10:44:43.679830   92747 cli_runner.go:164] Run: docker container inspect ha-925879-m02 --format={{.State.Status}}
	I0929 10:44:43.696592   92747 status.go:371] ha-925879-m02 host status = "Stopped" (err=<nil>)
	I0929 10:44:43.696611   92747 status.go:384] host is not running, skipping remaining checks
	I0929 10:44:43.696616   92747 status.go:176] ha-925879-m02 status: &{Name:ha-925879-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 10:44:43.696650   92747 status.go:174] checking status of ha-925879-m04 ...
	I0929 10:44:43.696908   92747 cli_runner.go:164] Run: docker container inspect ha-925879-m04 --format={{.State.Status}}
	I0929 10:44:43.714606   92747 status.go:371] ha-925879-m04 host status = "Stopped" (err=<nil>)
	I0929 10:44:43.714627   92747 status.go:384] host is not running, skipping remaining checks
	I0929 10:44:43.714635   92747 status.go:176] ha-925879-m04 status: &{Name:ha-925879-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (42.94s)

TestMultiControlPlane/serial/RestartCluster (54.43s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-925879 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (53.668242756s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (54.43s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.66s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.66s)

TestMultiControlPlane/serial/AddSecondaryNode (71.19s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 node add --control-plane --alsologtostderr -v 5
E0929 10:45:52.659042    7117 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/functional-992924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-925879 node add --control-plane --alsologtostderr -v 5: (1m10.358357134s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-925879 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (71.19s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.86s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.86s)

TestJSONOutput/start/Command (68.12s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-904661 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E0929 10:47:08.240016    7117 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/addons-300979/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-904661 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m8.114884449s)
--- PASS: TestJSONOutput/start/Command (68.12s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.64s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-904661 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.64s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.59s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-904661 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.59s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.95s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-904661 --output=json --user=testUser
E0929 10:48:08.796707    7117 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/functional-992924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-904661 --output=json --user=testUser: (5.94658182s)
--- PASS: TestJSONOutput/stop/Command (5.95s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-045559 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-045559 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (66.306009ms)

-- stdout --
	{"specversion":"1.0","id":"d7bfc6cd-2fe1-49b3-8113-052a0af14539","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-045559] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"14db1bb2-9293-4096-8031-5871670e8656","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21657"}}
	{"specversion":"1.0","id":"380dd8d5-ae3c-4f25-9573-1861edce189a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"9393d4d1-bb36-489b-bde7-c4d591864343","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21657-3615/kubeconfig"}}
	{"specversion":"1.0","id":"a6a3a4e6-0473-4a84-9a60-d646cc42630f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21657-3615/.minikube"}}
	{"specversion":"1.0","id":"917dd658-dc11-4e65-a3c3-b4c5a6c47314","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"f1da5528-5ff0-499c-8216-10b3c3ade6bb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e6d4ba8a-532f-4a4a-b4fc-fc549ce4b48d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-045559" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-045559
--- PASS: TestErrorJSONOutput (0.20s)
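
For reference, each line that minikube emits under --output=json above is a CloudEvents envelope whose "data" payload is a flat map of strings. A minimal Go sketch of decoding one such line follows; the event struct is illustrative only, not a type taken from the minikube source tree.

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// event mirrors the envelope fields visible in the JSON lines above.
	type event struct {
		SpecVersion string            `json:"specversion"`
		ID          string            `json:"id"`
		Source      string            `json:"source"`
		Type        string            `json:"type"`
		Data        map[string]string `json:"data"`
	}

	func main() {
		// Trimmed copy of the io.k8s.sigs.minikube.error line from the run above.
		line := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS"}}`
		var ev event
		if err := json.Unmarshal([]byte(line), &ev); err != nil {
			fmt.Println("parse error:", err)
			return
		}
		fmt.Printf("%s: %s (exit code %s)\n", ev.Type, ev.Data["message"], ev.Data["exitcode"])
	}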

TestKicCustomNetwork/create_custom_network (28.27s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-421529 --network=
E0929 10:48:36.501173    7117 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/functional-992924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-421529 --network=: (26.152940877s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-421529" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-421529
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-421529: (2.095984584s)
--- PASS: TestKicCustomNetwork/create_custom_network (28.27s)

TestKicCustomNetwork/use_default_bridge_network (21.97s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-620925 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-620925 --network=bridge: (20.048701203s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-620925" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-620925
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-620925: (1.900065188s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (21.97s)

TestKicExistingNetwork (23.41s)

=== RUN   TestKicExistingNetwork
I0929 10:49:09.075250    7117 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0929 10:49:09.092150    7117 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0929 10:49:09.092218    7117 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0929 10:49:09.092235    7117 cli_runner.go:164] Run: docker network inspect existing-network
W0929 10:49:09.109528    7117 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0929 10:49:09.109561    7117 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I0929 10:49:09.109581    7117 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I0929 10:49:09.109771    7117 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0929 10:49:09.126305    7117 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-f554a76b7a72 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ca:a2:0e:ef:7b:ea} reservation:<nil>}
I0929 10:49:09.126651    7117 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0015ab0b0}
I0929 10:49:09.126677    7117 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0929 10:49:09.126714    7117 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0929 10:49:09.184078    7117 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-099817 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-099817 --network=existing-network: (21.336360172s)
helpers_test.go:175: Cleaning up "existing-network-099817" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-099817
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-099817: (1.93178145s)
I0929 10:49:32.469325    7117 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (23.41s)
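
The network.go lines above show the subnet picker at work: 192.168.49.0/24 is skipped because the existing cluster bridge already claims it, and 192.168.58.0/24 is chosen instead (the step of 9 matches the 49/58/67 progression seen elsewhere in this report). A minimal Go sketch of that selection idea, with the taken set hard-coded for illustration instead of being read from docker network inspect:

	package main

	import "fmt"

	// firstFreeSubnet walks candidate 192.168.x.0/24 blocks in steps of 9
	// (49, 58, 67, ...) and returns the first one not already claimed.
	func firstFreeSubnet(taken map[string]bool) string {
		for third := 49; third < 255; third += 9 {
			cidr := fmt.Sprintf("192.168.%d.0/24", third)
			if !taken[cidr] {
				return cidr
			}
		}
		return ""
	}

	func main() {
		// Stand-in for the subnets Docker reports as in use.
		taken := map[string]bool{"192.168.49.0/24": true}
		fmt.Println(firstFreeSubnet(taken)) // prints 192.168.58.0/24
	}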

TestKicCustomSubnet (22.25s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-970440 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-970440 --subnet=192.168.60.0/24: (20.159476235s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-970440 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-970440" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-970440
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-970440: (2.072657015s)
--- PASS: TestKicCustomSubnet (22.25s)

TestKicStaticIP (24.18s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-432486 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-432486 --static-ip=192.168.200.200: (21.951473446s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-432486 ip
helpers_test.go:175: Cleaning up "static-ip-432486" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-432486
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-432486: (2.094666275s)
--- PASS: TestKicStaticIP (24.18s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (48.18s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-465901 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-465901 --driver=docker  --container-runtime=crio: (20.16301812s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-479059 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-479059 --driver=docker  --container-runtime=crio: (22.22008034s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-465901
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-479059
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-479059" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-479059
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-479059: (2.293549579s)
helpers_test.go:175: Cleaning up "first-465901" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-465901
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-465901: (2.313182992s)
--- PASS: TestMinikubeProfile (48.18s)

TestMountStart/serial/StartWithMountFirst (5.46s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-245418 --memory=3072 --mount-string /tmp/TestMountStartserial3883416912/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-245418 --memory=3072 --mount-string /tmp/TestMountStartserial3883416912/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.459071509s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.46s)

TestMountStart/serial/VerifyMountFirst (0.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-245418 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

TestMountStart/serial/StartWithMountSecond (5.24s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-262570 --memory=3072 --mount-string /tmp/TestMountStartserial3883416912/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-262570 --memory=3072 --mount-string /tmp/TestMountStartserial3883416912/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.244288787s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.24s)

TestMountStart/serial/VerifyMountSecond (0.24s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-262570 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.24s)

TestMountStart/serial/DeleteFirst (1.64s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-245418 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-245418 --alsologtostderr -v=5: (1.637224096s)
--- PASS: TestMountStart/serial/DeleteFirst (1.64s)

TestMountStart/serial/VerifyMountPostDelete (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-262570 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

TestMountStart/serial/Stop (1.18s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-262570
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-262570: (1.177656446s)
--- PASS: TestMountStart/serial/Stop (1.18s)

TestMountStart/serial/RestartStopped (7.36s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-262570
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-262570: (6.359530718s)
--- PASS: TestMountStart/serial/RestartStopped (7.36s)

TestMountStart/serial/VerifyMountPostStop (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-262570 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.25s)

TestMultiNode/serial/FreshStart2Nodes (93.97s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-102952 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E0929 10:52:08.239175    7117 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/addons-300979/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-102952 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m33.523798177s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102952 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (93.97s)

TestMultiNode/serial/DeployApp2Nodes (4.69s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-102952 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-102952 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-102952 -- rollout status deployment/busybox: (3.325007453s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-102952 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-102952 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-102952 -- exec busybox-7b57f96db7-kkgs8 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-102952 -- exec busybox-7b57f96db7-x5c9d -- nslookup kubernetes.io
E0929 10:53:08.796504    7117 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/functional-992924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-102952 -- exec busybox-7b57f96db7-kkgs8 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-102952 -- exec busybox-7b57f96db7-x5c9d -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-102952 -- exec busybox-7b57f96db7-kkgs8 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-102952 -- exec busybox-7b57f96db7-x5c9d -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.69s)

TestMultiNode/serial/PingHostFrom2Pods (0.72s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-102952 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-102952 -- exec busybox-7b57f96db7-kkgs8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-102952 -- exec busybox-7b57f96db7-kkgs8 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-102952 -- exec busybox-7b57f96db7-x5c9d -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-102952 -- exec busybox-7b57f96db7-x5c9d -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.72s)

TestMultiNode/serial/AddNode (53.9s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-102952 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-102952 -v=5 --alsologtostderr: (53.297512676s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102952 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (53.90s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-102952 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.62s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.62s)

TestMultiNode/serial/CopyFile (9.16s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102952 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102952 cp testdata/cp-test.txt multinode-102952:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102952 ssh -n multinode-102952 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102952 cp multinode-102952:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1676451236/001/cp-test_multinode-102952.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102952 ssh -n multinode-102952 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102952 cp multinode-102952:/home/docker/cp-test.txt multinode-102952-m02:/home/docker/cp-test_multinode-102952_multinode-102952-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102952 ssh -n multinode-102952 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102952 ssh -n multinode-102952-m02 "sudo cat /home/docker/cp-test_multinode-102952_multinode-102952-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102952 cp multinode-102952:/home/docker/cp-test.txt multinode-102952-m03:/home/docker/cp-test_multinode-102952_multinode-102952-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102952 ssh -n multinode-102952 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102952 ssh -n multinode-102952-m03 "sudo cat /home/docker/cp-test_multinode-102952_multinode-102952-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102952 cp testdata/cp-test.txt multinode-102952-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102952 ssh -n multinode-102952-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102952 cp multinode-102952-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1676451236/001/cp-test_multinode-102952-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102952 ssh -n multinode-102952-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102952 cp multinode-102952-m02:/home/docker/cp-test.txt multinode-102952:/home/docker/cp-test_multinode-102952-m02_multinode-102952.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102952 ssh -n multinode-102952-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102952 ssh -n multinode-102952 "sudo cat /home/docker/cp-test_multinode-102952-m02_multinode-102952.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102952 cp multinode-102952-m02:/home/docker/cp-test.txt multinode-102952-m03:/home/docker/cp-test_multinode-102952-m02_multinode-102952-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102952 ssh -n multinode-102952-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102952 ssh -n multinode-102952-m03 "sudo cat /home/docker/cp-test_multinode-102952-m02_multinode-102952-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102952 cp testdata/cp-test.txt multinode-102952-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102952 ssh -n multinode-102952-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102952 cp multinode-102952-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1676451236/001/cp-test_multinode-102952-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102952 ssh -n multinode-102952-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102952 cp multinode-102952-m03:/home/docker/cp-test.txt multinode-102952:/home/docker/cp-test_multinode-102952-m03_multinode-102952.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102952 ssh -n multinode-102952-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102952 ssh -n multinode-102952 "sudo cat /home/docker/cp-test_multinode-102952-m03_multinode-102952.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102952 cp multinode-102952-m03:/home/docker/cp-test.txt multinode-102952-m02:/home/docker/cp-test_multinode-102952-m03_multinode-102952-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102952 ssh -n multinode-102952-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102952 ssh -n multinode-102952-m02 "sudo cat /home/docker/cp-test_multinode-102952-m03_multinode-102952-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.16s)

TestMultiNode/serial/StopNode (2.21s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102952 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-102952 node stop m03: (1.278504996s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102952 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-102952 status: exit status 7 (464.870675ms)

-- stdout --
	multinode-102952
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-102952-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-102952-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102952 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-102952 status --alsologtostderr: exit status 7 (467.353109ms)

-- stdout --
	multinode-102952
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-102952-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-102952-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0929 10:54:15.730064  155636 out.go:360] Setting OutFile to fd 1 ...
	I0929 10:54:15.730320  155636 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:54:15.730330  155636 out.go:374] Setting ErrFile to fd 2...
	I0929 10:54:15.730335  155636 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:54:15.730511  155636 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21657-3615/.minikube/bin
	I0929 10:54:15.730694  155636 out.go:368] Setting JSON to false
	I0929 10:54:15.730725  155636 mustload.go:65] Loading cluster: multinode-102952
	I0929 10:54:15.730783  155636 notify.go:220] Checking for updates...
	I0929 10:54:15.731252  155636 config.go:182] Loaded profile config "multinode-102952": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 10:54:15.731278  155636 status.go:174] checking status of multinode-102952 ...
	I0929 10:54:15.731775  155636 cli_runner.go:164] Run: docker container inspect multinode-102952 --format={{.State.Status}}
	I0929 10:54:15.752609  155636 status.go:371] multinode-102952 host status = "Running" (err=<nil>)
	I0929 10:54:15.752636  155636 host.go:66] Checking if "multinode-102952" exists ...
	I0929 10:54:15.752901  155636 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-102952
	I0929 10:54:15.769968  155636 host.go:66] Checking if "multinode-102952" exists ...
	I0929 10:54:15.770207  155636 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 10:54:15.770247  155636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102952
	I0929 10:54:15.786751  155636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21657-3615/.minikube/machines/multinode-102952/id_rsa Username:docker}
	I0929 10:54:15.878910  155636 ssh_runner.go:195] Run: systemctl --version
	I0929 10:54:15.883121  155636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 10:54:15.894092  155636 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 10:54:15.947340  155636 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:65 SystemTime:2025-09-29 10:54:15.937719018 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 10:54:15.947862  155636 kubeconfig.go:125] found "multinode-102952" server: "https://192.168.67.2:8443"
	I0929 10:54:15.947919  155636 api_server.go:166] Checking apiserver status ...
	I0929 10:54:15.947956  155636 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 10:54:15.958975  155636 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1450/cgroup
	W0929 10:54:15.968534  155636 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1450/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0929 10:54:15.968577  155636 ssh_runner.go:195] Run: ls
	I0929 10:54:15.972008  155636 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0929 10:54:15.975920  155636 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0929 10:54:15.975939  155636 status.go:463] multinode-102952 apiserver status = Running (err=<nil>)
	I0929 10:54:15.975947  155636 status.go:176] multinode-102952 status: &{Name:multinode-102952 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 10:54:15.975965  155636 status.go:174] checking status of multinode-102952-m02 ...
	I0929 10:54:15.976268  155636 cli_runner.go:164] Run: docker container inspect multinode-102952-m02 --format={{.State.Status}}
	I0929 10:54:15.993353  155636 status.go:371] multinode-102952-m02 host status = "Running" (err=<nil>)
	I0929 10:54:15.993372  155636 host.go:66] Checking if "multinode-102952-m02" exists ...
	I0929 10:54:15.993667  155636 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-102952-m02
	I0929 10:54:16.010690  155636 host.go:66] Checking if "multinode-102952-m02" exists ...
	I0929 10:54:16.010945  155636 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 10:54:16.010992  155636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102952-m02
	I0929 10:54:16.027927  155636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21657-3615/.minikube/machines/multinode-102952-m02/id_rsa Username:docker}
	I0929 10:54:16.120765  155636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 10:54:16.132088  155636 status.go:176] multinode-102952-m02 status: &{Name:multinode-102952-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0929 10:54:16.132123  155636 status.go:174] checking status of multinode-102952-m03 ...
	I0929 10:54:16.132393  155636 cli_runner.go:164] Run: docker container inspect multinode-102952-m03 --format={{.State.Status}}
	I0929 10:54:16.149529  155636 status.go:371] multinode-102952-m03 host status = "Stopped" (err=<nil>)
	I0929 10:54:16.149550  155636 status.go:384] host is not running, skipping remaining checks
	I0929 10:54:16.149555  155636 status.go:176] multinode-102952-m03 status: &{Name:multinode-102952-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.21s)
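
The status trace above reduces to three probes per node: docker container inspect for the host state, systemctl is-active kubelet over SSH, and an HTTPS GET against the apiserver's /healthz endpoint. A minimal Go sketch of that last probe follows; the skip-verify client is an illustration shortcut, whereas the real check authenticates with the cluster's TLS material.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Illustration only: skip certificate verification rather than loading
		// the cluster CA the way the real status check does.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.67.2:8443/healthz")
		if err != nil {
			fmt.Println("apiserver unreachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
	}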

TestMultiNode/serial/StartAfterStop (7.04s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102952 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-102952 node start m03 -v=5 --alsologtostderr: (6.378408819s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102952 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.04s)

TestMultiNode/serial/RestartKeepsNodes (79.17s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-102952
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-102952
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-102952: (29.327885422s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-102952 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-102952 --wait=true -v=5 --alsologtostderr: (49.753224451s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-102952
--- PASS: TestMultiNode/serial/RestartKeepsNodes (79.17s)

TestMultiNode/serial/DeleteNode (5.18s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102952 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-102952 node delete m03: (4.617820156s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102952 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.18s)

TestMultiNode/serial/StopMultiNode (28.54s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102952 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-102952 stop: (28.369437005s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102952 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-102952 status: exit status 7 (82.556493ms)

-- stdout --
	multinode-102952
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-102952-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102952 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-102952 status --alsologtostderr: exit status 7 (82.659511ms)

-- stdout --
	multinode-102952
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-102952-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0929 10:56:16.042146  165797 out.go:360] Setting OutFile to fd 1 ...
	I0929 10:56:16.042391  165797 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:56:16.042399  165797 out.go:374] Setting ErrFile to fd 2...
	I0929 10:56:16.042403  165797 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 10:56:16.042567  165797 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21657-3615/.minikube/bin
	I0929 10:56:16.042727  165797 out.go:368] Setting JSON to false
	I0929 10:56:16.042755  165797 mustload.go:65] Loading cluster: multinode-102952
	I0929 10:56:16.042879  165797 notify.go:220] Checking for updates...
	I0929 10:56:16.043166  165797 config.go:182] Loaded profile config "multinode-102952": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 10:56:16.043185  165797 status.go:174] checking status of multinode-102952 ...
	I0929 10:56:16.043727  165797 cli_runner.go:164] Run: docker container inspect multinode-102952 --format={{.State.Status}}
	I0929 10:56:16.063227  165797 status.go:371] multinode-102952 host status = "Stopped" (err=<nil>)
	I0929 10:56:16.063267  165797 status.go:384] host is not running, skipping remaining checks
	I0929 10:56:16.063273  165797 status.go:176] multinode-102952 status: &{Name:multinode-102952 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 10:56:16.063318  165797 status.go:174] checking status of multinode-102952-m02 ...
	I0929 10:56:16.063567  165797 cli_runner.go:164] Run: docker container inspect multinode-102952-m02 --format={{.State.Status}}
	I0929 10:56:16.080483  165797 status.go:371] multinode-102952-m02 host status = "Stopped" (err=<nil>)
	I0929 10:56:16.080501  165797 status.go:384] host is not running, skipping remaining checks
	I0929 10:56:16.080506  165797 status.go:176] multinode-102952-m02 status: &{Name:multinode-102952-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (28.54s)

TestMultiNode/serial/RestartMultiNode (48.67s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-102952 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-102952 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (48.101301866s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-102952 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (48.67s)

TestMultiNode/serial/ValidateNameConflict (23.91s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-102952
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-102952-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-102952-m02 --driver=docker  --container-runtime=crio: exit status 14 (60.03238ms)

-- stdout --
	* [multinode-102952-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21657
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21657-3615/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21657-3615/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-102952-m02' is duplicated with machine name 'multinode-102952-m02' in profile 'multinode-102952'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-102952-m03 --driver=docker  --container-runtime=crio
E0929 10:57:08.242102    7117 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/addons-300979/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-102952-m03 --driver=docker  --container-runtime=crio: (21.251683614s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-102952
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-102952: exit status 80 (272.08631ms)

-- stdout --
	* Adding node m03 to cluster multinode-102952 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-102952-m03 already exists in multinode-102952-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-102952-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-102952-m03: (2.282500353s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (23.91s)

TestPreload (106.91s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-270200 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
E0929 10:58:08.796836    7117 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/functional-992924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-270200 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (47.099264879s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-270200 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-270200 image pull gcr.io/k8s-minikube/busybox: (2.333968491s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-270200
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-270200: (5.776196195s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-270200 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-270200 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (49.142068827s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-270200 image list
helpers_test.go:175: Cleaning up "test-preload-270200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-270200
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-270200: (2.342562339s)
--- PASS: TestPreload (106.91s)

TestScheduledStopUnix (95.63s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-180184 --memory=3072 --driver=docker  --container-runtime=crio
E0929 10:59:31.865546    7117 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/functional-992924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-180184 --memory=3072 --driver=docker  --container-runtime=crio: (19.875664086s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-180184 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-180184 -n scheduled-stop-180184
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-180184 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0929 10:59:39.894085    7117 retry.go:31] will retry after 74.504µs: open /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/scheduled-stop-180184/pid: no such file or directory
I0929 10:59:39.895236    7117 retry.go:31] will retry after 158.625µs: open /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/scheduled-stop-180184/pid: no such file or directory
I0929 10:59:39.896365    7117 retry.go:31] will retry after 190.961µs: open /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/scheduled-stop-180184/pid: no such file or directory
I0929 10:59:39.897507    7117 retry.go:31] will retry after 227.792µs: open /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/scheduled-stop-180184/pid: no such file or directory
I0929 10:59:39.898655    7117 retry.go:31] will retry after 379.774µs: open /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/scheduled-stop-180184/pid: no such file or directory
I0929 10:59:39.899817    7117 retry.go:31] will retry after 688.599µs: open /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/scheduled-stop-180184/pid: no such file or directory
I0929 10:59:39.900943    7117 retry.go:31] will retry after 765.424µs: open /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/scheduled-stop-180184/pid: no such file or directory
I0929 10:59:39.902083    7117 retry.go:31] will retry after 2.454016ms: open /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/scheduled-stop-180184/pid: no such file or directory
I0929 10:59:39.905308    7117 retry.go:31] will retry after 3.529083ms: open /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/scheduled-stop-180184/pid: no such file or directory
I0929 10:59:39.909531    7117 retry.go:31] will retry after 4.310922ms: open /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/scheduled-stop-180184/pid: no such file or directory
I0929 10:59:39.914732    7117 retry.go:31] will retry after 4.459482ms: open /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/scheduled-stop-180184/pid: no such file or directory
I0929 10:59:39.919936    7117 retry.go:31] will retry after 12.305562ms: open /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/scheduled-stop-180184/pid: no such file or directory
I0929 10:59:39.933162    7117 retry.go:31] will retry after 17.646237ms: open /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/scheduled-stop-180184/pid: no such file or directory
I0929 10:59:39.951392    7117 retry.go:31] will retry after 21.230847ms: open /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/scheduled-stop-180184/pid: no such file or directory
I0929 10:59:39.973650    7117 retry.go:31] will retry after 43.475696ms: open /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/scheduled-stop-180184/pid: no such file or directory
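The retry cadence above (74µs, 158µs, … up to tens of milliseconds) is the test helper polling for the scheduled-stop pid file with roughly doubling delays. A stand-alone sketch of that polling pattern follows; the path, starting delay, and attempt budget are illustrative, not the test's actual constants.

package main

import (
	"fmt"
	"os"
	"time"
)

// pollFile retries os.ReadFile with an increasing delay until the file
// appears or the attempt budget runs out, mirroring the retry.go lines above.
func pollFile(path string, maxAttempts int) ([]byte, error) {
	delay := 100 * time.Microsecond // illustrative starting delay
	var lastErr error
	for i := 0; i < maxAttempts; i++ {
		b, err := os.ReadFile(path)
		if err == nil {
			return b, nil
		}
		lastErr = err
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2 // roughly doubling, as in the log
	}
	return nil, lastErr
}

func main() {
	if b, err := pollFile("/tmp/scheduled-stop-demo.pid", 15); err != nil {
		fmt.Println("gave up:", err)
	} else {
		fmt.Printf("pid file contents: %s\n", b)
	}
}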
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-180184 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-180184 -n scheduled-stop-180184
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-180184
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-180184 --schedule 15s
E0929 11:00:11.306702    7117 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/addons-300979/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-180184
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-180184: exit status 7 (63.348386ms)

                                                
                                                
-- stdout --
	scheduled-stop-180184
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-180184 -n scheduled-stop-180184
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-180184 -n scheduled-stop-180184: exit status 7 (63.731443ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-180184" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-180184
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-180184: (4.417644615s)
--- PASS: TestScheduledStopUnix (95.63s)
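As a toy model of what --schedule arranges (an illustration of the schedule/cancel pattern only, not minikube's actual implementation): the worker records its pid so a later --cancel-scheduled can find and kill it, then fires the stop once the delay elapses. The pid-file path below is the same illustrative one used in the polling sketch above.

package main

import (
	"fmt"
	"os"
	"strconv"
	"time"
)

const pidFile = "/tmp/scheduled-stop-demo.pid" // illustrative path

func main() {
	// Record our pid so a "cancel" can find and signal this process,
	// analogous to the pid file the test polls for above.
	if err := os.WriteFile(pidFile, []byte(strconv.Itoa(os.Getpid())), 0o644); err != nil {
		fmt.Println("write pid:", err)
		return
	}
	defer os.Remove(pidFile)

	delay := 15 * time.Second
	fmt.Printf("stop scheduled in %v; kill this process to cancel\n", delay)
	time.Sleep(delay)
	fmt.Println("stopping now (a real implementation would shut the cluster down here)")
}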

                                                
                                    
TestInsufficientStorage (9.71s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-479265 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-479265 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.282275485s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"898b8a53-d016-4137-a4f6-4ebe3e999da5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-479265] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"bc5cd52f-daa7-4028-9e1a-05d67e2db05e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21657"}}
	{"specversion":"1.0","id":"370cf816-61c4-4e13-80ae-0c789b4ebc23","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ad41e671-83e3-4d8e-a3d3-15580ca29209","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21657-3615/kubeconfig"}}
	{"specversion":"1.0","id":"a9076d9d-c54c-462c-8ec2-3c442cbf5bca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21657-3615/.minikube"}}
	{"specversion":"1.0","id":"d7de6b14-248a-48ef-af9b-db92e9764eb4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"b515b2b0-ca15-4547-af58-4af55c955fd4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"51528220-de8b-4a69-bc15-a2284e8e495a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"456e8d55-7019-499e-ab93-f6ac57a6e579","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"2026d344-afd9-40b8-850a-bb7d5ffe3e53","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"7d37cb8e-9325-4995-a852-10e9a6075298","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"7798e79b-3075-45f7-b061-545c1e54ea1a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-479265\" primary control-plane node in \"insufficient-storage-479265\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"e89fe487-d3fd-434d-9700-a6378e4e738e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"aaa55690-48a9-4a22-b64d-2cd91db59c34","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"560821e9-d28c-4da1-a039-bbb5f8191e24","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
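With --output=json each progress line above is a CloudEvents envelope, and the failure arrives as a final event of type io.k8s.sigs.minikube.error whose data carries the exit code and remediation advice. A sketch that scans such a stream for error events follows; the struct covers only the fields visible in this log, not the full schema.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event models just the CloudEvents fields visible in the output above.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// e.g. pipe `minikube start --output=json ...` into this program
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // some event lines are long
	for sc.Scan() {
		var e event
		if json.Unmarshal(sc.Bytes(), &e) != nil {
			continue // skip anything that is not a JSON event line
		}
		if e.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error %s (exit %s): %s\n",
				e.Data["name"], e.Data["exitcode"], e.Data["message"])
		}
	}
}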
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-479265 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-479265 --output=json --layout=cluster: exit status 7 (272.70729ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-479265","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-479265","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0929 11:01:02.788271  188125 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-479265" does not appear in /home/jenkins/minikube-integration/21657-3615/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-479265 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-479265 --output=json --layout=cluster: exit status 7 (263.989952ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-479265","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-479265","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0929 11:01:03.053309  188229 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-479265" does not appear in /home/jenkins/minikube-integration/21657-3615/kubeconfig
	E0929 11:01:03.063898  188229 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/insufficient-storage-479265/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-479265" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-479265
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-479265: (1.888087979s)
--- PASS: TestInsufficientStorage (9.71s)
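The --layout=cluster payload uses HTTP-style status codes, 507 here for insufficient storage. Below is a sketch that decodes the JSON shown above and flags that condition; the field names are copied from the output, but this struct is a subset of the real schema.

package main

import (
	"encoding/json"
	"fmt"
)

// clusterStatus models the top-level fields of
// `minikube status --output=json --layout=cluster` seen in the log above.
type clusterStatus struct {
	Name         string `json:"Name"`
	StatusCode   int    `json:"StatusCode"`
	StatusName   string `json:"StatusName"`
	StatusDetail string `json:"StatusDetail"`
}

func main() {
	raw := []byte(`{"Name":"insufficient-storage-479265","StatusCode":507,` +
		`"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space"}`)
	var st clusterStatus
	if err := json.Unmarshal(raw, &st); err != nil {
		fmt.Println("decode:", err)
		return
	}
	if st.StatusCode == 507 { // 507 = InsufficientStorage, as in the log
		fmt.Printf("%s: %s (%s)\n", st.Name, st.StatusName, st.StatusDetail)
	}
}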

                                                
                                    
TestRunningBinaryUpgrade (54.01s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.245504533 start -p running-upgrade-992107 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.245504533 start -p running-upgrade-992107 --memory=3072 --vm-driver=docker  --container-runtime=crio: (29.268546678s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-992107 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-992107 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (21.591921599s)
helpers_test.go:175: Cleaning up "running-upgrade-992107" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-992107
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-992107: (2.632419855s)
--- PASS: TestRunningBinaryUpgrade (54.01s)
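The pattern under test: an old release binary creates a cluster and leaves it running, then the freshly built binary runs start against the same profile and upgrades it in place. A hedged sketch of that sequence (both binary paths are illustrative):

package main

import (
	"fmt"
	"os/exec"
)

// run executes a given minikube binary and echoes its combined output.
func run(bin string, args ...string) {
	out, err := exec.Command(bin, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s", bin, args, out)
	if err != nil {
		fmt.Println("error:", err)
	}
}

func main() {
	old := "/tmp/minikube-v1.32.0"    // illustrative path to an old release
	cur := "out/minikube-linux-amd64" // freshly built binary, as in the log

	// 1) the old binary creates the cluster and leaves it running...
	run(old, "start", "-p", "upgrade-demo", "--vm-driver=docker", "--container-runtime=crio")
	// 2) ...then the new binary upgrades the same profile in place.
	run(cur, "start", "-p", "upgrade-demo", "--driver=docker", "--container-runtime=crio")
	run(cur, "delete", "-p", "upgrade-demo")
}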

                                                
                                    
TestKubernetesUpgrade (311.6s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-804143 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-804143 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (35.903757968s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-804143
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-804143: (1.970369671s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-804143 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-804143 status --format={{.Host}}: exit status 7 (70.693551ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-804143 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-804143 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m26.112654574s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-804143 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-804143 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-804143 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (66.688016ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-804143] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21657
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21657-3615/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21657-3615/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-804143
	    minikube start -p kubernetes-upgrade-804143 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8041432 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.0, by running:
	    
	    minikube start -p kubernetes-upgrade-804143 --kubernetes-version=v1.34.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-804143 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-804143 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4.656396494s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-804143" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-804143
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-804143: (2.760698795s)
--- PASS: TestKubernetesUpgrade (311.60s)
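The downgrade refusal (exit 106, K8S_DOWNGRADE_UNSUPPORTED) comes down to a semver comparison between the cluster's current Kubernetes version and the requested one. Here is a sketch of that guard using golang.org/x/mod/semver; the exit code is copied from the log, the rest is illustrative and not minikube's actual code.

package main

import (
	"fmt"
	"os"

	"golang.org/x/mod/semver"
)

const exitKubernetesDowngrade = 106 // matches the exit status in the log

// checkVersion refuses to move an existing cluster to an older Kubernetes
// release, mirroring the behaviour exercised above.
func checkVersion(current, requested string) error {
	if semver.Compare(requested, current) < 0 {
		return fmt.Errorf("unable to safely downgrade existing Kubernetes %s cluster to %s",
			current, requested)
	}
	return nil
}

func main() {
	if err := checkVersion("v1.34.0", "v1.28.0"); err != nil {
		fmt.Fprintln(os.Stderr, "X Exiting due to K8S_DOWNGRADE_UNSUPPORTED:", err)
		os.Exit(exitKubernetesDowngrade)
	}
	fmt.Println("upgrade (or same-version restart) is allowed")
}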

                                                
                                    
TestMissingContainerUpgrade (66.73s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.1269906970 start -p missing-upgrade-744241 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.1269906970 start -p missing-upgrade-744241 --memory=3072 --driver=docker  --container-runtime=crio: (23.110232288s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-744241
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-744241: (1.738723744s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-744241
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-744241 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-744241 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (37.515038546s)
helpers_test.go:175: Cleaning up "missing-upgrade-744241" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-744241
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-744241: (3.907576634s)
--- PASS: TestMissingContainerUpgrade (66.73s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.53s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.53s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (65.84s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.1083489541 start -p stopped-upgrade-267564 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.1083489541 start -p stopped-upgrade-267564 --memory=3072 --vm-driver=docker  --container-runtime=crio: (48.295743613s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.1083489541 -p stopped-upgrade-267564 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.1083489541 -p stopped-upgrade-267564 stop: (2.735578813s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-267564 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-267564 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (14.807910337s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (65.84s)

                                                
                                    
TestNetworkPlugins/group/false (10.94s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-078909 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-078909 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (999.927986ms)

                                                
                                                
-- stdout --
	* [false-078909] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21657
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21657-3615/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21657-3615/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 11:01:09.019777  190152 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:01:09.020124  190152 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:01:09.020134  190152 out.go:374] Setting ErrFile to fd 2...
	I0929 11:01:09.020139  190152 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:01:09.020385  190152 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21657-3615/.minikube/bin
	I0929 11:01:09.020981  190152 out.go:368] Setting JSON to false
	I0929 11:01:09.022140  190152 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2613,"bootTime":1759141056,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 11:01:09.022249  190152 start.go:140] virtualization: kvm guest
	I0929 11:01:09.024787  190152 out.go:179] * [false-078909] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 11:01:09.026845  190152 notify.go:220] Checking for updates...
	I0929 11:01:09.026911  190152 out.go:179]   - MINIKUBE_LOCATION=21657
	I0929 11:01:09.028656  190152 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 11:01:09.031078  190152 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21657-3615/kubeconfig
	I0929 11:01:09.032524  190152 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21657-3615/.minikube
	I0929 11:01:09.034543  190152 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 11:01:09.036016  190152 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 11:01:09.037955  190152 config.go:182] Loaded profile config "kubernetes-upgrade-804143": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I0929 11:01:09.038074  190152 config.go:182] Loaded profile config "offline-crio-785193": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0929 11:01:09.038165  190152 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 11:01:09.065690  190152 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 11:01:09.065859  190152 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 11:01:09.150039  190152 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:false NGoroutines:67 SystemTime:2025-09-29 11:01:09.135535775 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 11:01:09.150168  190152 docker.go:318] overlay module found
	I0929 11:01:09.186725  190152 out.go:179] * Using the docker driver based on user configuration
	I0929 11:01:09.389569  190152 start.go:304] selected driver: docker
	I0929 11:01:09.389598  190152 start.go:924] validating driver "docker" against <nil>
	I0929 11:01:09.389615  190152 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 11:01:09.501843  190152 out.go:203] 
	W0929 11:01:09.657732  190152 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0929 11:01:09.828174  190152 out.go:203] 

                                                
                                                
** /stderr **
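The sub-second failure here is a pre-flight usage check: crio has no built-in pod networking, so --cni=false is rejected with MK_USAGE (exit 14) before any container is created. A sketch of that rule follows; the real validation lives in minikube's start flow and covers more runtime/CNI combinations.

package main

import (
	"fmt"
	"os"
)

const exitUsage = 14 // MK_USAGE, as in the log above

// validateCNI mirrors the check exercised by this test: runtimes without
// built-in networking require a CNI plugin.
func validateCNI(runtime, cni string) error {
	needsCNI := runtime == "crio" || runtime == "containerd"
	if needsCNI && cni == "false" {
		return fmt.Errorf("the %q container runtime requires CNI", runtime)
	}
	return nil
}

func main() {
	if err := validateCNI("crio", "false"); err != nil {
		fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE:", err)
		os.Exit(exitUsage)
	}
}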
net_test.go:88: 
----------------------- debugLogs start: false-078909 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-078909

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-078909

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-078909

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-078909

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-078909

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-078909

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-078909

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-078909

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-078909

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-078909

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078909"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078909"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078909"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-078909

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078909"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078909"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-078909" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-078909" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-078909" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-078909" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-078909" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-078909" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-078909" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-078909" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078909"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078909"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078909"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078909"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078909"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-078909" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-078909" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-078909" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078909"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078909"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078909"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078909"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078909"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-078909

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078909"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078909"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078909"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078909"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078909"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078909"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078909"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078909"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078909"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078909"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078909"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078909"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078909"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078909"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078909"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078909"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078909"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078909"

                                                
                                                
----------------------- debugLogs end: false-078909 [took: 9.597650611s] --------------------------------
helpers_test.go:175: Cleaning up "false-078909" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-078909
--- PASS: TestNetworkPlugins/group/false (10.94s)
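Even though the start fails as intended, net_test's debugLogs helper still sweeps a fixed battery of kubectl, minikube, and host probes, which is why every ">>>" entry above fails uniformly: the profile was never created. A sketch of that sweep pattern; the three probes shown are an illustrative subset, not the helper's full list.

package main

import (
	"fmt"
	"os/exec"
)

// probe runs one diagnostic command and prints its output under a ">>>"
// header, echoing the debugLogs layout above.
func probe(label, name string, args ...string) {
	fmt.Printf(">>> %s:\n", label)
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println(err)
	}
	fmt.Println()
}

func main() {
	profile := "false-078909" // illustrative; any profile name works
	probe("k8s: kubectl config", "kubectl", "config", "view")
	probe("host: crio daemon status", "minikube", "-p", profile, "ssh", "sudo systemctl status crio")
	probe("k8s: coredns logs", "kubectl", "--context", profile,
		"logs", "-n", "kube-system", "-l", "k8s-app=kube-dns")
}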

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.13s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-267564
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-267564: (1.131652083s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.13s)

                                                
                                    
TestPause/serial/Start (45.75s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-109793 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-109793 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (45.751406516s)
--- PASS: TestPause/serial/Start (45.75s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-368992 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-368992 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (75.968677ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-368992] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21657
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21657-3615/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21657-3615/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
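This is another pre-flight usage check: an explicit --kubernetes-version contradicts --no-kubernetes, so minikube exits 14 before doing any work. A sketch of the conflict check (the flag wiring is illustrative, not minikube's actual flag set-up):

package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noKubernetes := flag.Bool("no-kubernetes", false, "start without Kubernetes")
	kubernetesVersion := flag.String("kubernetes-version", "", "Kubernetes version to use")
	flag.Parse()

	// Mirrors the MK_USAGE rejection above: the two flags contradict each other.
	if *noKubernetes && *kubernetesVersion != "" {
		fmt.Fprintln(os.Stderr,
			"X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14)
	}
	fmt.Println("flags are consistent")
}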

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (25.81s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-368992 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-368992 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (25.487370915s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-368992 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (25.81s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (6.41s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-109793 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-109793 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (6.403505442s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.41s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (23.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-368992 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0929 11:03:08.796522    7117 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/functional-992924/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-368992 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (20.838899084s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-368992 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-368992 status -o json: exit status 2 (296.801569ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-368992","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-368992
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-368992: (1.949859417s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (23.09s)
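A note on the status check above: after restarting the profile with --no-kubernetes, the host stays "Running" while Kubelet and APIServer report "Stopped", which is why status exits with code 2. A minimal sketch of inspecting the same fields by hand, assuming jq is available on the PATH:

	# Machine-readable status; a non-zero exit (2 above) is expected while Kubernetes is stopped
	out/minikube-linux-amd64 -p NoKubernetes-368992 status -o json | jq -r '.Host, .Kubelet, .APIServer'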

TestPause/serial/Pause (0.63s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-109793 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.63s)

TestPause/serial/VerifyStatus (0.3s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-109793 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-109793 --output=json --layout=cluster: exit status 2 (299.654963ms)

-- stdout --
	{"Name":"pause-109793","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-109793","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.30s)
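The --layout=cluster JSON above encodes component health as HTTP-style codes (200 OK, 405 Stopped, 418 Paused), which is also why the command exits 2 for a paused profile. A small sketch for pulling the per-component states out of that output, assuming jq:

	# Summarize cluster and component StatusNames from the layout shown above
	out/minikube-linux-amd64 status -p pause-109793 --output=json --layout=cluster \
	  | jq '{cluster: .StatusName, components: (.Nodes[0].Components | map_values(.StatusName))}'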

TestPause/serial/Unpause (0.61s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-109793 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.61s)

TestPause/serial/PauseAgain (0.63s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-109793 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.63s)

TestPause/serial/DeletePaused (2.61s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-109793 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-109793 --alsologtostderr -v=5: (2.607983675s)
--- PASS: TestPause/serial/DeletePaused (2.61s)

TestPause/serial/VerifyDeletedResources (16.94s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (16.873511478s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-109793
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-109793: exit status 1 (20.255641ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-109793: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (16.94s)
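The check above leans on docker's exit codes: once the profile is deleted, docker volume inspect returns non-zero with an empty [] on stdout. A hand-run equivalent of the same verification, using the profile name from this run (and assuming the docker network is named after the profile, as the kic driver does):

	# Both checks should report removal after "minikube delete -p pause-109793"
	docker volume inspect pause-109793 >/dev/null 2>&1 || echo "volume removed"
	docker network ls --format '{{.Name}}' | grep -x pause-109793 || echo "network removed"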

TestNoKubernetes/serial/Start (7.27s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-368992 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-368992 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (7.267204672s)
--- PASS: TestNoKubernetes/serial/Start (7.27s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-368992 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-368992 "sudo systemctl is-active --quiet service kubelet": exit status 1 (274.960616ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)
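The "exit status 3" in the stderr block is systemd's convention rather than a test failure: systemctl is-active exits 0 for an active unit and non-zero (3 for an inactive one), so an inactive kubelet is exactly what a --no-kubernetes profile should report. Checked by hand:

	# Prints "inactive" and rc=3 when Kubernetes is disabled in the profile
	out/minikube-linux-amd64 ssh -p NoKubernetes-368992 'sudo systemctl is-active kubelet; echo rc=$?'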

TestNoKubernetes/serial/ProfileList (1.74s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:171: (dbg) Done: out/minikube-linux-amd64 profile list: (1.028993038s)
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.74s)

TestNoKubernetes/serial/Stop (2.54s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-368992
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-368992: (2.543169841s)
--- PASS: TestNoKubernetes/serial/Stop (2.54s)

TestNoKubernetes/serial/StartNoArgs (6.32s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-368992 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-368992 --driver=docker  --container-runtime=crio: (6.31634513s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.32s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-368992 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-368992 "sudo systemctl is-active --quiet service kubelet": exit status 1 (276.919846ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

TestStartStop/group/old-k8s-version/serial/FirstStart (54.78s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-162905 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-162905 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (54.777111718s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (54.78s)

TestStartStop/group/no-preload/serial/FirstStart (51.41s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-891708 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-891708 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (51.414387729s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (51.41s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.3s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-162905 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [313ff3be-ce35-4b2f-a088-ed1936d716d6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [313ff3be-ce35-4b2f-a088-ed1936d716d6] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.003374921s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-162905 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.30s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.86s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-162905 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-162905 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.86s)

TestStartStop/group/old-k8s-version/serial/Stop (16.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-162905 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-162905 --alsologtostderr -v=3: (16.079930171s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (16.08s)

TestStartStop/group/no-preload/serial/DeployApp (9.31s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-891708 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [60ad2c0a-aaff-405b-8ddd-df098d3b27d8] Pending
helpers_test.go:352: "busybox" [60ad2c0a-aaff-405b-8ddd-df098d3b27d8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [60ad2c0a-aaff-405b-8ddd-df098d3b27d8] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003685019s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-891708 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.31s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.79s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-891708 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-891708 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.79s)

TestStartStop/group/no-preload/serial/Stop (16.33s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-891708 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-891708 --alsologtostderr -v=3: (16.333564285s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (16.33s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.16s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-162905 -n old-k8s-version-162905
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-162905 -n old-k8s-version-162905: exit status 7 (65.632008ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-162905 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.16s)
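As the "(may be ok)" note says, the non-zero status here is expected: the profile was just stopped, and the suite accepts exit status 7 with Host reporting "Stopped" before it enables addons. A sketch of gating a follow-up command on that state, using the exit code observed in this run:

	# Proceed only once the stopped state is confirmed (exit code 7 observed above)
	out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-162905 -n old-k8s-version-162905
	[ $? -eq 7 ] && out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-162905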

TestStartStop/group/old-k8s-version/serial/SecondStart (51.84s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-162905 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-162905 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (51.517842201s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-162905 -n old-k8s-version-162905
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (51.84s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-891708 -n no-preload-891708
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-891708 -n no-preload-891708: exit status 7 (80.452429ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-891708 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/no-preload/serial/SecondStart (43.93s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-891708 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-891708 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (43.581910448s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-891708 -n no-preload-891708
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (43.93s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-nvwfs" [6bbe76ed-4e25-4043-8c23-5ede04e192b2] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003279621s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-65gc8" [595db690-b0e3-4dd5-a48d-58b070b3d497] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003882591s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-nvwfs" [6bbe76ed-4e25-4043-8c23-5ede04e192b2] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004776061s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-162905 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-65gc8" [595db690-b0e3-4dd5-a48d-58b070b3d497] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006816402s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-891708 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-162905 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/old-k8s-version/serial/Pause (2.8s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-162905 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-162905 -n old-k8s-version-162905
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-162905 -n old-k8s-version-162905: exit status 2 (301.319807ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-162905 -n old-k8s-version-162905
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-162905 -n old-k8s-version-162905: exit status 2 (292.705967ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-162905 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-162905 -n old-k8s-version-162905
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-162905 -n old-k8s-version-162905
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.80s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-891708 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/no-preload/serial/Pause (3.18s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-891708 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-891708 -n no-preload-891708
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-891708 -n no-preload-891708: exit status 2 (313.127852ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-891708 -n no-preload-891708
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-891708 -n no-preload-891708: exit status 2 (326.661679ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-891708 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-891708 -n no-preload-891708
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-891708 -n no-preload-891708
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.18s)

TestStartStop/group/embed-certs/serial/FirstStart (45.66s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-719648 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-719648 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (45.664693534s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (45.66s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (39.16s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-992320 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-992320 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (39.163608142s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (39.16s)
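This group exists to exercise a non-default API server port: the only functional difference from a plain start is --apiserver-port=8444 in place of the usual 8443. Reduced to its essentials, with a hypothetical profile name:

	# Serve the Kubernetes API on port 8444 rather than the default
	minikube start -p diff-port-demo --apiserver-port=8444 --driver=docker --container-runtime=crio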

TestStartStop/group/newest-cni/serial/FirstStart (28.14s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-302787 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-302787 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (28.141836834s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (28.14s)
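The start command above shows the two knobs the newest-cni group relies on: --network-plugin=cni, plus a kubeadm override passed through --extra-config in component.key=value form. Stripped down, with a hypothetical profile name:

	# Hand a pod CIDR straight to kubeadm while bringing the cluster up with CNI enabled
	minikube start -p cni-demo --network-plugin=cni \
	  --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
	  --driver=docker --container-runtime=crio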

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.73s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-302787 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.73s)

TestStartStop/group/newest-cni/serial/Stop (2.39s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-302787 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-302787 --alsologtostderr -v=3: (2.391334487s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.39s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-302787 -n newest-cni-302787
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-302787 -n newest-cni-302787: exit status 7 (67.597823ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-302787 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/newest-cni/serial/SecondStart (11.84s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-302787 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-302787 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (11.516896081s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-302787 -n newest-cni-302787
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (11.84s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.31s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-992320 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [339ae910-4d50-4cd3-955e-59c2733bcb1b] Pending
helpers_test.go:352: "busybox" [339ae910-4d50-4cd3-955e-59c2733bcb1b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [339ae910-4d50-4cd3-955e-59c2733bcb1b] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.003969023s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-992320 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.31s)

TestStartStop/group/embed-certs/serial/DeployApp (8.32s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-719648 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [fd748915-8b5e-4e3d-95f4-8568cc7798c0] Pending
helpers_test.go:352: "busybox" [fd748915-8b5e-4e3d-95f4-8568cc7798c0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [fd748915-8b5e-4e3d-95f4-8568cc7798c0] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.004716653s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-719648 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.32s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-302787 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/newest-cni/serial/Pause (2.57s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-302787 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-302787 -n newest-cni-302787
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-302787 -n newest-cni-302787: exit status 2 (317.388668ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-302787 -n newest-cni-302787
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-302787 -n newest-cni-302787: exit status 2 (293.8377ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-302787 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-302787 -n newest-cni-302787
E0929 11:07:08.239431    7117 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/addons-300979/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-302787 -n newest-cni-302787
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.57s)

TestNetworkPlugins/group/auto/Start (41.01s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-078909 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-078909 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (41.012758301s)
--- PASS: TestNetworkPlugins/group/auto/Start (41.01s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.38s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-992320 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-992320 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.29666555s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-992320 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.38s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.28s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-719648 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-719648 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.191011065s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-719648 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.28s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (17.84s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-992320 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-992320 --alsologtostderr -v=3: (17.844841552s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (17.84s)

TestStartStop/group/embed-certs/serial/Stop (18.17s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-719648 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-719648 --alsologtostderr -v=3: (18.167387285s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (18.17s)

TestNetworkPlugins/group/flannel/Start (74.09s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-078909 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-078909 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m14.092274683s)
--- PASS: TestNetworkPlugins/group/flannel/Start (74.09s)
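Unlike the auto group, which takes whatever CNI minikube selects by default, this run picks one explicitly with --cni=flannel (the enable-default-cni group below drives the choice through --enable-default-cni=true the same way). A minimal equivalent, profile name hypothetical:

	# Bring up a crio cluster with the flannel CNI instead of the auto-selected one
	minikube start -p flannel-demo --cni=flannel --driver=docker --container-runtime=crio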

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-992320 -n default-k8s-diff-port-992320
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-992320 -n default-k8s-diff-port-992320: exit status 7 (82.191469ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-992320 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (54.68s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-992320 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-992320 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (54.372061036s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-992320 -n default-k8s-diff-port-992320
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (54.68s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-719648 -n embed-certs-719648
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-719648 -n embed-certs-719648: exit status 7 (85.959331ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-719648 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/embed-certs/serial/SecondStart (47.66s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-719648 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-719648 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (47.330986205s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-719648 -n embed-certs-719648
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (47.66s)

TestNetworkPlugins/group/auto/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-078909 "pgrep -a kubelet"
I0929 11:07:50.162137    7117 config.go:182] Loaded profile config "auto-078909": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.33s)

TestNetworkPlugins/group/auto/NetCatPod (11.27s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-078909 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-jjxkt" [b8155768-e622-4ae8-82fa-4cb89af5cbbf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-jjxkt" [b8155768-e622-4ae8-82fa-4cb89af5cbbf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004568906s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.27s)

TestNetworkPlugins/group/auto/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-078909 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.14s)

TestNetworkPlugins/group/auto/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-078909 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

TestNetworkPlugins/group/auto/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-078909 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-mwkx4" [ab34c45f-b349-4e18-bc4f-a6f36a65d014] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003456527s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestNetworkPlugins/group/enable-default-cni/Start (67.35s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-078909 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-078909 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m7.349952004s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (67.35s)
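All TestNetworkPlugins/*/Start runs in this report share one invocation shape and differ only in how the CNI is selected. A sketch of the two variants seen here, with flags taken from the log (minikube stands in for the built out/minikube-linux-amd64 binary):

	minikube start -p enable-default-cni-078909 --memory=3072 --wait=true --wait-timeout=15m \
	    --enable-default-cni=true --driver=docker --container-runtime=crio
	minikube start -p calico-078909 --memory=3072 --wait=true --wait-timeout=15m \
	    --cni=calico --driver=docker --container-runtime=crio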

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-mwkx4" [ab34c45f-b349-4e18-bc4f-a6f36a65d014] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003969737s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-719648 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-grwwx" [345fd5e3-861e-4e35-8ccb-54c3724cd9f6] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003398487s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)
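UserAppExistsAfterStop (and the AddonExistsAfterStop variant that follows) confirms that workloads deployed before a stop/start cycle come back healthy; the dashboard pods are matched by label, not by name. A manual equivalent might look like this (the wait invocation is my own sketch; label, namespace, and timeout come from the log above):

	kubectl --context default-k8s-diff-port-992320 -n kubernetes-dashboard \
	    wait --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=9m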

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-th894" [a4d30edb-c372-4481-a51c-32f886ecf4f8] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004242042s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-719648 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)
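VerifyKubernetesImages lists every image cached in the profile and reports anything outside the stock minikube set; the two images flagged above are informational rather than failures, since the test still passes. To reproduce the listing (command taken verbatim from the log):

	out/minikube-linux-amd64 -p embed-certs-719648 image list --format=json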

TestStartStop/group/embed-certs/serial/Pause (2.68s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-719648 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-719648 -n embed-certs-719648
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-719648 -n embed-certs-719648: exit status 2 (302.2228ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-719648 -n embed-certs-719648
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-719648 -n embed-certs-719648: exit status 2 (315.627345ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-719648 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-719648 -n embed-certs-719648
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-719648 -n embed-certs-719648
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.68s)
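The Pause sequence above is: pause the profile, assert via status that the API server reports Paused and the kubelet reports Stopped, then unpause and assert both recover. status returns exit code 2 while components are not running, which the test explicitly tolerates ("may be ok"). By hand, with commands as shown in the log:

	out/minikube-linux-amd64 pause -p embed-certs-719648 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-719648 -n embed-certs-719648
	out/minikube-linux-amd64 unpause -p embed-certs-719648 --alsologtostderr -v=1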

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-grwwx" [345fd5e3-861e-4e35-8ccb-54c3724cd9f6] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006256838s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-992320 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-078909 "pgrep -a kubelet"
I0929 11:08:33.287473    7117 config.go:182] Loaded profile config "flannel-078909": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)
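KubeletFlags subtests only verify that a kubelet process is running inside the profile's node container and capture its full command line for inspection; pgrep -a prints each matching PID with its argument list. Sketch (verbatim from the log):

	out/minikube-linux-amd64 ssh -p flannel-078909 "pgrep -a kubelet"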

TestNetworkPlugins/group/flannel/NetCatPod (9.2s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-078909 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-hgkcv" [1871f4fa-7b3e-4e33-8de8-e9d94bc97de1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-hgkcv" [1871f4fa-7b3e-4e33-8de8-e9d94bc97de1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.005030966s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.20s)

TestNetworkPlugins/group/bridge/Start (38.67s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-078909 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-078909 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (38.670986312s)
--- PASS: TestNetworkPlugins/group/bridge/Start (38.67s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.33s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-992320 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.33s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.05s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-992320 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-992320 -n default-k8s-diff-port-992320
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-992320 -n default-k8s-diff-port-992320: exit status 2 (319.053866ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-992320 -n default-k8s-diff-port-992320
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-992320 -n default-k8s-diff-port-992320: exit status 2 (327.118852ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-992320 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-992320 -n default-k8s-diff-port-992320
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-992320 -n default-k8s-diff-port-992320
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.05s)
E0929 11:09:56.753378    7117 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/no-preload-891708/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:09:59.315147    7117 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/no-preload-891708/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

TestNetworkPlugins/group/flannel/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-078909 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

TestNetworkPlugins/group/calico/Start (47.15s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-078909 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-078909 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (47.154258708s)
--- PASS: TestNetworkPlugins/group/calico/Start (47.15s)

TestNetworkPlugins/group/flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-078909 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

TestNetworkPlugins/group/flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-078909 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

TestNetworkPlugins/group/kindnet/Start (71.86s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-078909 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-078909 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m11.860224559s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (71.86s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-078909 "pgrep -a kubelet"
I0929 11:09:13.857771    7117 config.go:182] Loaded profile config "bridge-078909": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

TestNetworkPlugins/group/bridge/NetCatPod (9.21s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-078909 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-7lwk9" [6e793b2e-85d5-46f2-99d8-e6a9cd33b6ee] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-7lwk9" [6e793b2e-85d5-46f2-99d8-e6a9cd33b6ee] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004073121s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.21s)

TestNetworkPlugins/group/bridge/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-078909 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

TestNetworkPlugins/group/bridge/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-078909 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

TestNetworkPlugins/group/bridge/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-078909 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.17s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-078909 "pgrep -a kubelet"
I0929 11:09:27.796227    7117 config.go:182] Loaded profile config "enable-default-cni-078909": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-078909 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-8tmrz" [4d345af1-0518-40d2-9179-1ad3e3272423] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-8tmrz" [4d345af1-0518-40d2-9179-1ad3e3272423] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 8.003869062s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.22s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-wzmh6" [faf19326-8892-435a-acb0-05c6df9fb34c] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-wzmh6" [faf19326-8892-435a-acb0-05c6df9fb34c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003250484s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-078909 "pgrep -a kubelet"
I0929 11:09:35.961896    7117 config.go:182] Loaded profile config "calico-078909": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.28s)

TestNetworkPlugins/group/calico/NetCatPod (10.23s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-078909 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-9smd4" [260846f3-13a8-4a13-ae49-c78f50c9137d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-9smd4" [260846f3-13a8-4a13-ae49-c78f50c9137d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.003671484s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.23s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-078909 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-078909 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-078909 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

TestNetworkPlugins/group/custom-flannel/Start (47.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-078909 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E0929 11:09:44.924090    7117 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/old-k8s-version-162905/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:09:44.930469    7117 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/old-k8s-version-162905/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:09:44.941947    7117 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/old-k8s-version-162905/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:09:44.963356    7117 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/old-k8s-version-162905/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:09:45.004791    7117 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/old-k8s-version-162905/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:09:45.086082    7117 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/old-k8s-version-162905/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:09:45.248123    7117 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/old-k8s-version-162905/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:09:45.569424    7117 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/old-k8s-version-162905/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-078909 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (47.128035818s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (47.13s)
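Note that --cni accepts a path to a CNI manifest as well as a built-in plugin name, which is how this custom-flannel variant is driven. A sketch with flags from the log (minikube stands in for the built binary):

	minikube start -p custom-flannel-078909 --memory=3072 --wait=true --wait-timeout=15m \
	    --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=crio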

TestNetworkPlugins/group/calico/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-078909 exec deployment/netcat -- nslookup kubernetes.default
E0929 11:09:46.211023    7117 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/old-k8s-version-162905/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/calico/DNS (0.16s)

TestNetworkPlugins/group/calico/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-078909 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.11s)

TestNetworkPlugins/group/calico/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-078909 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.11s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-5ln5s" [6aa73d7f-899a-4dc1-a1e8-3537cf8352f6] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00451018s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
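ControllerPod subtests gate the rest of a plugin's group on the CNI's own controller pods becoming healthy, found by label in the plugin's namespace. A manual equivalent (the get/wait invocations are my own sketch; label and namespace come from the log above):

	kubectl --context kindnet-078909 -n kube-system get pods -l app=kindnet
	kubectl --context kindnet-078909 -n kube-system wait --for=condition=ready pod -l app=kindnet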

TestNetworkPlugins/group/kindnet/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-078909 "pgrep -a kubelet"
I0929 11:10:21.648228    7117 config.go:182] Loaded profile config "kindnet-078909": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.27s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.16s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-078909 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-mr4hs" [ed2d8c6c-4c23-4be1-b350-dac077614aaf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-mr4hs" [ed2d8c6c-4c23-4be1-b350-dac077614aaf] Running
E0929 11:10:25.900370    7117 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/old-k8s-version-162905/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.003308466s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.16s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-078909 "pgrep -a kubelet"
I0929 11:10:30.548185    7117 config.go:182] Loaded profile config "custom-flannel-078909": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-078909 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-b84tg" [d2c43773-e2b9-4815-abfa-c588c813878a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-b84tg" [d2c43773-e2b9-4815-abfa-c588c813878a] Running
E0929 11:10:35.161011    7117 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21657-3615/.minikube/profiles/no-preload-891708/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.003687301s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.17s)

TestNetworkPlugins/group/kindnet/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-078909 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.12s)

TestNetworkPlugins/group/kindnet/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-078909 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.11s)

TestNetworkPlugins/group/kindnet/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-078909 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.11s)

TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-078909 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-078909 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-078909 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

Test skip (27/332)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.0/cached-images (0.00s)

TestDownloadOnly/v1.34.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.0/binaries (0.00s)

TestDownloadOnly/v1.34.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.0/kubectl (0.00s)

TestAddons/serial/Volcano (0.27s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-300979 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.27s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestNetworkPlugins/group/kubenet (4s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-078909 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-078909

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-078909

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-078909

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-078909

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-078909

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-078909

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-078909

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-078909

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-078909

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-078909

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078909"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078909"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078909"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-078909

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078909"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078909"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-078909" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-078909" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-078909" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-078909" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-078909" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-078909" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-078909" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-078909" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078909"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078909"

>>> host: ip r s:
* Profile "kubenet-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078909"

>>> host: iptables-save:
* Profile "kubenet-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078909"

>>> host: iptables table nat:
* Profile "kubenet-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078909"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-078909" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-078909" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-078909" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078909"

>>> host: kubelet daemon config:
* Profile "kubenet-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078909"

>>> k8s: kubelet logs:
* Profile "kubenet-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078909"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078909"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078909"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

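Note: the empty kubeconfig above (clusters, contexts, and users all null) is consistent with every kubectl probe in this debugLogs dump failing with "context does not exist" / "context was not found": the kubenet-078909 profile was deleted before it was ever started, so minikube never merged an entry for it. For comparison, a minimal sketch of the kind of entry minikube writes for a started profile; the server address and certificate paths below are illustrative assumptions, not values from this run:

apiVersion: v1
kind: Config
clusters:
- name: kubenet-078909
  cluster:
    certificate-authority: /home/jenkins/.minikube/ca.crt    # illustrative path
    server: https://192.168.49.2:8443                        # illustrative endpoint
contexts:
- name: kubenet-078909
  context:
    cluster: kubenet-078909
    user: kubenet-078909
current-context: kubenet-078909
users:
- name: kubenet-078909
  user:
    client-certificate: /home/jenkins/.minikube/profiles/kubenet-078909/client.crt  # illustrative
    client-key: /home/jenkins/.minikube/profiles/kubenet-078909/client.key          # illustrative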
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-078909

>>> host: docker daemon status:
* Profile "kubenet-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078909"

>>> host: docker daemon config:
* Profile "kubenet-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078909"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078909"

>>> host: docker system info:
* Profile "kubenet-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078909"

>>> host: cri-docker daemon status:
* Profile "kubenet-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078909"

>>> host: cri-docker daemon config:
* Profile "kubenet-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078909"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078909"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078909"

>>> host: cri-dockerd version:
* Profile "kubenet-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078909"

>>> host: containerd daemon status:
* Profile "kubenet-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078909"

>>> host: containerd daemon config:
* Profile "kubenet-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078909"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078909"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078909"

>>> host: containerd config dump:
* Profile "kubenet-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078909"

>>> host: crio daemon status:
* Profile "kubenet-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078909"

>>> host: crio daemon config:
* Profile "kubenet-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078909"

>>> host: /etc/crio:
* Profile "kubenet-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078909"

>>> host: crio config:
* Profile "kubenet-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078909"

----------------------- debugLogs end: kubenet-078909 [took: 3.800962323s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-078909" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-078909
--- SKIP: TestNetworkPlugins/group/kubenet (4.00s)

TestStartStop/group/disable-driver-mounts (0.43s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-044889" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-044889
--- SKIP: TestStartStop/group/disable-driver-mounts (0.43s)

TestNetworkPlugins/group/cilium (4.63s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-078909 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-078909

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-078909

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-078909

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-078909

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-078909

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-078909

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-078909

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-078909

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-078909

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-078909

>>> host: /etc/nsswitch.conf:
* Profile "cilium-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078909"

>>> host: /etc/hosts:
* Profile "cilium-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078909"

>>> host: /etc/resolv.conf:
* Profile "cilium-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078909"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-078909

>>> host: crictl pods:
* Profile "cilium-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078909"

>>> host: crictl containers:
* Profile "cilium-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078909"

>>> k8s: describe netcat deployment:
error: context "cilium-078909" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-078909" does not exist

>>> k8s: netcat logs:
error: context "cilium-078909" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-078909" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-078909" does not exist

>>> k8s: coredns logs:
error: context "cilium-078909" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-078909" does not exist

>>> k8s: api server logs:
error: context "cilium-078909" does not exist

>>> host: /etc/cni:
* Profile "cilium-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078909"

>>> host: ip a s:
* Profile "cilium-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078909"

>>> host: ip r s:
* Profile "cilium-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078909"

>>> host: iptables-save:
* Profile "cilium-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078909"

>>> host: iptables table nat:
* Profile "cilium-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078909"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-078909

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-078909

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-078909" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-078909" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-078909

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-078909

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-078909" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-078909" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-078909" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-078909" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-078909" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078909"

>>> host: kubelet daemon config:
* Profile "cilium-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078909"

>>> k8s: kubelet logs:
* Profile "cilium-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078909"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078909"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078909"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-078909

>>> host: docker daemon status:
* Profile "cilium-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078909"

>>> host: docker daemon config:
* Profile "cilium-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078909"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078909"

>>> host: docker system info:
* Profile "cilium-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078909"

>>> host: cri-docker daemon status:
* Profile "cilium-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078909"

>>> host: cri-docker daemon config:
* Profile "cilium-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078909"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078909"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078909"

>>> host: cri-dockerd version:
* Profile "cilium-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078909"

>>> host: containerd daemon status:
* Profile "cilium-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078909"

>>> host: containerd daemon config:
* Profile "cilium-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078909"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078909"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078909"

>>> host: containerd config dump:
* Profile "cilium-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078909"

>>> host: crio daemon status:
* Profile "cilium-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078909"

>>> host: crio daemon config:
* Profile "cilium-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078909"

>>> host: /etc/crio:
* Profile "cilium-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078909"

>>> host: crio config:
* Profile "cilium-078909" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078909"

----------------------- debugLogs end: cilium-078909 [took: 4.431835188s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-078909" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-078909
--- SKIP: TestNetworkPlugins/group/cilium (4.63s)
