Test Report: Docker_Linux_crio_arm64 21550

0aba0a8e31d541259ffdeb45c9650281430067b8:2025-09-17:41464

Failed tests (6/332)

Order  Failed test                                   Duration (s)
37     TestAddons/parallel/Ingress                    154.23
98     TestFunctional/parallel/ServiceCmdConnect      603.72
126    TestFunctional/parallel/ServiceCmd/DeployApp   600.86
135    TestFunctional/parallel/ServiceCmd/HTTPS       0.38
136    TestFunctional/parallel/ServiceCmd/Format      0.58
137    TestFunctional/parallel/ServiceCmd/URL         0.46
TestAddons/parallel/Ingress (154.23s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-160127 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-160127 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-160127 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [2952fde7-ae1d-41db-97dd-5db12b951fae] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [2952fde7-ae1d-41db-97dd-5db12b951fae] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.009340296s
I0917 00:30:50.771670  859053 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-160127 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-160127 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.976171378s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-160127 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-160127 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-160127
helpers_test.go:243: (dbg) docker inspect addons-160127:

-- stdout --
	[
	    {
	        "Id": "a07e817d82f4cc2007cdc12de7abcbcc9e8d712045b29bdd211aeb08d510d4bf",
	        "Created": "2025-09-17T00:26:18.579212402Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 860220,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-17T00:26:18.637203745Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:3d6f74760dfc17060da5abc5d463d3d45b4ceea05955c9cc42b3ec56cb38cc48",
	        "ResolvConfPath": "/var/lib/docker/containers/a07e817d82f4cc2007cdc12de7abcbcc9e8d712045b29bdd211aeb08d510d4bf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a07e817d82f4cc2007cdc12de7abcbcc9e8d712045b29bdd211aeb08d510d4bf/hostname",
	        "HostsPath": "/var/lib/docker/containers/a07e817d82f4cc2007cdc12de7abcbcc9e8d712045b29bdd211aeb08d510d4bf/hosts",
	        "LogPath": "/var/lib/docker/containers/a07e817d82f4cc2007cdc12de7abcbcc9e8d712045b29bdd211aeb08d510d4bf/a07e817d82f4cc2007cdc12de7abcbcc9e8d712045b29bdd211aeb08d510d4bf-json.log",
	        "Name": "/addons-160127",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-160127:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-160127",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a07e817d82f4cc2007cdc12de7abcbcc9e8d712045b29bdd211aeb08d510d4bf",
	                "LowerDir": "/var/lib/docker/overlay2/6e38bbafe77b5adb3d2b52187533448d297d0ccc0dcd4a059f2779b0cf442f48-init/diff:/var/lib/docker/overlay2/cd42a5ab2cf4c74437647f2d8b0837602d53b1f49cb4003f87c861b49a5e1d53/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6e38bbafe77b5adb3d2b52187533448d297d0ccc0dcd4a059f2779b0cf442f48/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6e38bbafe77b5adb3d2b52187533448d297d0ccc0dcd4a059f2779b0cf442f48/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6e38bbafe77b5adb3d2b52187533448d297d0ccc0dcd4a059f2779b0cf442f48/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-160127",
	                "Source": "/var/lib/docker/volumes/addons-160127/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-160127",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-160127",
	                "name.minikube.sigs.k8s.io": "addons-160127",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ebd7563570a18ef6a42567a75460b005e4e41f68f873fd236466700b180c3de1",
	            "SandboxKey": "/var/run/docker/netns/ebd7563570a1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33558"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33559"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33562"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33560"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33561"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-160127": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:c4:a5:44:b9:07",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6207fa1d629a027abaf3edf45d53372fb0d3017478e9ec0065034abb085c59fe",
	                    "EndpointID": "5a93faba8bf5983c774e0b8610e6b9d08804ad6d4408464a90c1a81f98382248",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-160127",
	                        "a07e817d82f4"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-160127 -n addons-160127
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-160127 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-160127 logs -n 25: (1.684452926s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-docker-206051                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-206051 │ jenkins │ v1.37.0 │ 17 Sep 25 00:25 UTC │ 17 Sep 25 00:25 UTC │
	│ start   │ --download-only -p binary-mirror-633798 --alsologtostderr --binary-mirror http://127.0.0.1:44367 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-633798   │ jenkins │ v1.37.0 │ 17 Sep 25 00:25 UTC │                     │
	│ delete  │ -p binary-mirror-633798                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-633798   │ jenkins │ v1.37.0 │ 17 Sep 25 00:25 UTC │ 17 Sep 25 00:25 UTC │
	│ addons  │ enable dashboard -p addons-160127                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-160127          │ jenkins │ v1.37.0 │ 17 Sep 25 00:25 UTC │                     │
	│ addons  │ disable dashboard -p addons-160127                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-160127          │ jenkins │ v1.37.0 │ 17 Sep 25 00:25 UTC │                     │
	│ start   │ -p addons-160127 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-160127          │ jenkins │ v1.37.0 │ 17 Sep 25 00:25 UTC │ 17 Sep 25 00:28 UTC │
	│ addons  │ addons-160127 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-160127          │ jenkins │ v1.37.0 │ 17 Sep 25 00:28 UTC │ 17 Sep 25 00:28 UTC │
	│ addons  │ addons-160127 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-160127          │ jenkins │ v1.37.0 │ 17 Sep 25 00:29 UTC │ 17 Sep 25 00:29 UTC │
	│ addons  │ addons-160127 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-160127          │ jenkins │ v1.37.0 │ 17 Sep 25 00:29 UTC │ 17 Sep 25 00:29 UTC │
	│ ip      │ addons-160127 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-160127          │ jenkins │ v1.37.0 │ 17 Sep 25 00:29 UTC │ 17 Sep 25 00:29 UTC │
	│ addons  │ addons-160127 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-160127          │ jenkins │ v1.37.0 │ 17 Sep 25 00:29 UTC │ 17 Sep 25 00:29 UTC │
	│ addons  │ addons-160127 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-160127          │ jenkins │ v1.37.0 │ 17 Sep 25 00:29 UTC │ 17 Sep 25 00:29 UTC │
	│ addons  │ addons-160127 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-160127          │ jenkins │ v1.37.0 │ 17 Sep 25 00:29 UTC │ 17 Sep 25 00:29 UTC │
	│ addons  │ enable headlamp -p addons-160127 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-160127          │ jenkins │ v1.37.0 │ 17 Sep 25 00:29 UTC │ 17 Sep 25 00:29 UTC │
	│ ssh     │ addons-160127 ssh cat /opt/local-path-provisioner/pvc-e4aa6a01-96f9-4229-b8a9-878dadd04a59_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-160127          │ jenkins │ v1.37.0 │ 17 Sep 25 00:29 UTC │ 17 Sep 25 00:29 UTC │
	│ addons  │ addons-160127 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-160127          │ jenkins │ v1.37.0 │ 17 Sep 25 00:29 UTC │ 17 Sep 25 00:30 UTC │
	│ addons  │ addons-160127 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-160127          │ jenkins │ v1.37.0 │ 17 Sep 25 00:29 UTC │ 17 Sep 25 00:29 UTC │
	│ addons  │ addons-160127 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-160127          │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │ 17 Sep 25 00:30 UTC │
	│ addons  │ addons-160127 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-160127          │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │ 17 Sep 25 00:30 UTC │
	│ ssh     │ addons-160127 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-160127          │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │                     │
	│ addons  │ addons-160127 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-160127          │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │ 17 Sep 25 00:30 UTC │
	│ addons  │ addons-160127 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-160127          │ jenkins │ v1.37.0 │ 17 Sep 25 00:30 UTC │ 17 Sep 25 00:31 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-160127                                                                                                                                                                                                                                                                                                                                                                                           │ addons-160127          │ jenkins │ v1.37.0 │ 17 Sep 25 00:31 UTC │ 17 Sep 25 00:31 UTC │
	│ addons  │ addons-160127 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-160127          │ jenkins │ v1.37.0 │ 17 Sep 25 00:31 UTC │ 17 Sep 25 00:31 UTC │
	│ ip      │ addons-160127 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-160127          │ jenkins │ v1.37.0 │ 17 Sep 25 00:33 UTC │ 17 Sep 25 00:33 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/17 00:25:53
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 00:25:53.786400  859826 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:25:53.786535  859826 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:25:53.786547  859826 out.go:374] Setting ErrFile to fd 2...
	I0917 00:25:53.786552  859826 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:25:53.786821  859826 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-857204/.minikube/bin
	I0917 00:25:53.787266  859826 out.go:368] Setting JSON to false
	I0917 00:25:53.788089  859826 start.go:130] hostinfo: {"hostname":"ip-172-31-30-239","uptime":11292,"bootTime":1758057462,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0917 00:25:53.788153  859826 start.go:140] virtualization:  
	I0917 00:25:53.791554  859826 out.go:179] * [addons-160127] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0917 00:25:53.794369  859826 out.go:179]   - MINIKUBE_LOCATION=21550
	I0917 00:25:53.794448  859826 notify.go:220] Checking for updates...
	I0917 00:25:53.800187  859826 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 00:25:53.803122  859826 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-857204/kubeconfig
	I0917 00:25:53.805929  859826 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-857204/.minikube
	I0917 00:25:53.808786  859826 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0917 00:25:53.811647  859826 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 00:25:53.814679  859826 driver.go:421] Setting default libvirt URI to qemu:///system
	I0917 00:25:53.834798  859826 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0917 00:25:53.834925  859826 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:25:53.892252  859826 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2025-09-17 00:25:53.883090336 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0917 00:25:53.892382  859826 docker.go:318] overlay module found
	I0917 00:25:53.895574  859826 out.go:179] * Using the docker driver based on user configuration
	I0917 00:25:53.898567  859826 start.go:304] selected driver: docker
	I0917 00:25:53.898600  859826 start.go:918] validating driver "docker" against <nil>
	I0917 00:25:53.898615  859826 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 00:25:53.899361  859826 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:25:53.957860  859826 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2025-09-17 00:25:53.94895762 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0917 00:25:53.958022  859826 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0917 00:25:53.958275  859826 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 00:25:53.961430  859826 out.go:179] * Using Docker driver with root privileges
	I0917 00:25:53.964434  859826 cni.go:84] Creating CNI manager for ""
	I0917 00:25:53.964521  859826 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0917 00:25:53.964534  859826 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0917 00:25:53.964675  859826 start.go:348] cluster config:
	{Name:addons-160127 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-160127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}

	I0917 00:25:53.967865  859826 out.go:179] * Starting "addons-160127" primary control-plane node in "addons-160127" cluster
	I0917 00:25:53.970807  859826 cache.go:123] Beginning downloading kic base image for docker with crio
	I0917 00:25:53.973713  859826 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:25:53.976705  859826 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 00:25:53.976771  859826 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21550-857204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4
	I0917 00:25:53.976781  859826 cache.go:58] Caching tarball of preloaded images
	I0917 00:25:53.976874  859826 preload.go:172] Found /home/jenkins/minikube-integration/21550-857204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0917 00:25:53.976883  859826 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0917 00:25:53.977245  859826 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/addons-160127/config.json ...
	I0917 00:25:53.977267  859826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/addons-160127/config.json: {Name:mkbbda03941d54635db4964c8958d84ccf96f4cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:25:53.977426  859826 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:25:53.992920  859826 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I0917 00:25:53.993059  859826 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory
	I0917 00:25:53.993084  859826 image.go:68] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory, skipping pull
	I0917 00:25:53.993089  859826 image.go:137] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in cache, skipping pull
	I0917 00:25:53.993097  859826 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 as a tarball
	I0917 00:25:53.993112  859826 cache.go:165] Loading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 from local cache
	I0917 00:26:11.801250  859826 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 from cached tarball
	I0917 00:26:11.801295  859826 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:26:11.801335  859826 start.go:360] acquireMachinesLock for addons-160127: {Name:mk5eedec4108bad25f5b4366fc398793d7b60b94 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:26:11.801461  859826 start.go:364] duration metric: took 107.866µs to acquireMachinesLock for "addons-160127"
	I0917 00:26:11.801489  859826 start.go:93] Provisioning new machine with config: &{Name:addons-160127 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-160127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 00:26:11.801594  859826 start.go:125] createHost starting for "" (driver="docker")
	I0917 00:26:11.805058  859826 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0917 00:26:11.805312  859826 start.go:159] libmachine.API.Create for "addons-160127" (driver="docker")
	I0917 00:26:11.805348  859826 client.go:168] LocalClient.Create starting
	I0917 00:26:11.805469  859826 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21550-857204/.minikube/certs/ca.pem
	I0917 00:26:11.967810  859826 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21550-857204/.minikube/certs/cert.pem
	I0917 00:26:12.203772  859826 cli_runner.go:164] Run: docker network inspect addons-160127 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0917 00:26:12.218344  859826 cli_runner.go:211] docker network inspect addons-160127 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0917 00:26:12.218457  859826 network_create.go:284] running [docker network inspect addons-160127] to gather additional debugging logs...
	I0917 00:26:12.218480  859826 cli_runner.go:164] Run: docker network inspect addons-160127
	W0917 00:26:12.233897  859826 cli_runner.go:211] docker network inspect addons-160127 returned with exit code 1
	I0917 00:26:12.233933  859826 network_create.go:287] error running [docker network inspect addons-160127]: docker network inspect addons-160127: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-160127 not found
	I0917 00:26:12.233947  859826 network_create.go:289] output of [docker network inspect addons-160127]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-160127 not found
	
	** /stderr **
	I0917 00:26:12.234063  859826 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:26:12.250582  859826 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40018335a0}
	I0917 00:26:12.250623  859826 network_create.go:124] attempt to create docker network addons-160127 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0917 00:26:12.250684  859826 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-160127 addons-160127
	I0917 00:26:12.311919  859826 network_create.go:108] docker network addons-160127 192.168.49.0/24 created
	I0917 00:26:12.311968  859826 kic.go:121] calculated static IP "192.168.49.2" for the "addons-160127" container
	I0917 00:26:12.312043  859826 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0917 00:26:12.327222  859826 cli_runner.go:164] Run: docker volume create addons-160127 --label name.minikube.sigs.k8s.io=addons-160127 --label created_by.minikube.sigs.k8s.io=true
	I0917 00:26:12.345812  859826 oci.go:103] Successfully created a docker volume addons-160127
	I0917 00:26:12.345915  859826 cli_runner.go:164] Run: docker run --rm --name addons-160127-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-160127 --entrypoint /usr/bin/test -v addons-160127:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0917 00:26:14.323356  859826 cli_runner.go:217] Completed: docker run --rm --name addons-160127-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-160127 --entrypoint /usr/bin/test -v addons-160127:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib: (1.977401105s)
	I0917 00:26:14.323386  859826 oci.go:107] Successfully prepared a docker volume addons-160127
	I0917 00:26:14.323414  859826 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 00:26:14.323441  859826 kic.go:194] Starting extracting preloaded images to volume ...
	I0917 00:26:14.323505  859826 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-857204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-160127:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0917 00:26:18.511702  859826 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21550-857204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-160127:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.188161015s)
	I0917 00:26:18.511735  859826 kic.go:203] duration metric: took 4.188290345s to extract preloaded images to volume ...
	W0917 00:26:18.511889  859826 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0917 00:26:18.512002  859826 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0917 00:26:18.564906  859826 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-160127 --name addons-160127 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-160127 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-160127 --network addons-160127 --ip 192.168.49.2 --volume addons-160127:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0917 00:26:18.864360  859826 cli_runner.go:164] Run: docker container inspect addons-160127 --format={{.State.Running}}
	I0917 00:26:18.890002  859826 cli_runner.go:164] Run: docker container inspect addons-160127 --format={{.State.Status}}
	I0917 00:26:18.914709  859826 cli_runner.go:164] Run: docker exec addons-160127 stat /var/lib/dpkg/alternatives/iptables
	I0917 00:26:18.965301  859826 oci.go:144] the created container "addons-160127" has a running status.
	I0917 00:26:18.965327  859826 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21550-857204/.minikube/machines/addons-160127/id_rsa...
	I0917 00:26:19.602563  859826 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21550-857204/.minikube/machines/addons-160127/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0917 00:26:19.623611  859826 cli_runner.go:164] Run: docker container inspect addons-160127 --format={{.State.Status}}
	I0917 00:26:19.650398  859826 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0917 00:26:19.650423  859826 kic_runner.go:114] Args: [docker exec --privileged addons-160127 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0917 00:26:19.712014  859826 cli_runner.go:164] Run: docker container inspect addons-160127 --format={{.State.Status}}
	I0917 00:26:19.733819  859826 machine.go:93] provisionDockerMachine start ...
	I0917 00:26:19.733911  859826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-160127
	I0917 00:26:19.756151  859826 main.go:141] libmachine: Using SSH client type: native
	I0917 00:26:19.756473  859826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 33558 <nil> <nil>}
	I0917 00:26:19.756483  859826 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:26:19.908407  859826 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-160127
	
	I0917 00:26:19.908491  859826 ubuntu.go:182] provisioning hostname "addons-160127"
	I0917 00:26:19.908608  859826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-160127
	I0917 00:26:19.929971  859826 main.go:141] libmachine: Using SSH client type: native
	I0917 00:26:19.930278  859826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 33558 <nil> <nil>}
	I0917 00:26:19.930293  859826 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-160127 && echo "addons-160127" | sudo tee /etc/hostname
	I0917 00:26:20.087255  859826 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-160127
	
	I0917 00:26:20.087358  859826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-160127
	I0917 00:26:20.105508  859826 main.go:141] libmachine: Using SSH client type: native
	I0917 00:26:20.105828  859826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 33558 <nil> <nil>}
	I0917 00:26:20.105852  859826 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-160127' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-160127/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-160127' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 00:26:20.244506  859826 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:26:20.244534  859826 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-857204/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-857204/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-857204/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-857204/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-857204/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-857204/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-857204/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-857204/.minikube}
	I0917 00:26:20.244581  859826 ubuntu.go:190] setting up certificates
	I0917 00:26:20.244598  859826 provision.go:84] configureAuth start
	I0917 00:26:20.244665  859826 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-160127
	I0917 00:26:20.261208  859826 provision.go:143] copyHostCerts
	I0917 00:26:20.261294  859826 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-857204/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-857204/.minikube/ca.pem (1078 bytes)
	I0917 00:26:20.261423  859826 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-857204/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-857204/.minikube/cert.pem (1123 bytes)
	I0917 00:26:20.261483  859826 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-857204/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-857204/.minikube/key.pem (1679 bytes)
	I0917 00:26:20.261534  859826 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-857204/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-857204/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-857204/.minikube/certs/ca-key.pem org=jenkins.addons-160127 san=[127.0.0.1 192.168.49.2 addons-160127 localhost minikube]
	I0917 00:26:20.600736  859826 provision.go:177] copyRemoteCerts
	I0917 00:26:20.600801  859826 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:26:20.600843  859826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-160127
	I0917 00:26:20.617131  859826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/21550-857204/.minikube/machines/addons-160127/id_rsa Username:docker}
	I0917 00:26:20.717122  859826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-857204/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 00:26:20.741239  859826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-857204/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 00:26:20.764242  859826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-857204/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0917 00:26:20.787635  859826 provision.go:87] duration metric: took 543.011973ms to configureAuth
	I0917 00:26:20.787661  859826 ubuntu.go:206] setting minikube options for container-runtime
	I0917 00:26:20.787850  859826 config.go:182] Loaded profile config "addons-160127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:26:20.787954  859826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-160127
	I0917 00:26:20.805462  859826 main.go:141] libmachine: Using SSH client type: native
	I0917 00:26:20.805771  859826 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 33558 <nil> <nil>}
	I0917 00:26:20.805794  859826 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 00:26:21.053683  859826 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 00:26:21.053713  859826 machine.go:96] duration metric: took 1.319872906s to provisionDockerMachine
	I0917 00:26:21.053723  859826 client.go:171] duration metric: took 9.248364093s to LocalClient.Create
	I0917 00:26:21.053745  859826 start.go:167] duration metric: took 9.248435971s to libmachine.API.Create "addons-160127"
	I0917 00:26:21.053753  859826 start.go:293] postStartSetup for "addons-160127" (driver="docker")
	I0917 00:26:21.053764  859826 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 00:26:21.053837  859826 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 00:26:21.053883  859826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-160127
	I0917 00:26:21.071416  859826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/21550-857204/.minikube/machines/addons-160127/id_rsa Username:docker}
	I0917 00:26:21.169730  859826 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 00:26:21.172978  859826 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 00:26:21.173014  859826 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 00:26:21.173025  859826 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 00:26:21.173033  859826 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 00:26:21.173044  859826 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-857204/.minikube/addons for local assets ...
	I0917 00:26:21.173117  859826 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-857204/.minikube/files for local assets ...
	I0917 00:26:21.173145  859826 start.go:296] duration metric: took 119.385006ms for postStartSetup
	I0917 00:26:21.173464  859826 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-160127
	I0917 00:26:21.189991  859826 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/addons-160127/config.json ...
	I0917 00:26:21.190276  859826 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:26:21.190331  859826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-160127
	I0917 00:26:21.206873  859826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/21550-857204/.minikube/machines/addons-160127/id_rsa Username:docker}
	I0917 00:26:21.301488  859826 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:26:21.306141  859826 start.go:128] duration metric: took 9.504530477s to createHost
	I0917 00:26:21.306167  859826 start.go:83] releasing machines lock for "addons-160127", held for 9.504696s
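The two `df`/`awk` probes above (usage percentage of `/var`, then available space in GB) can be shown in isolation; the heredoc-style sample below is an assumption standing in for real `df` output:

```shell
#!/bin/bash
# Simulated `df` output so the NR==2 awk extraction can run without a real mount.
df_output='Filesystem     Size  Used Avail Use% Mounted on
/dev/sda1       98G   24G   69G  26% /var'
pct=$(printf '%s\n' "$df_output" | awk 'NR==2{print $5}')    # column 5: Use%, as in `df -h`
avail=$(printf '%s\n' "$df_output" | awk 'NR==2{print $4}')  # column 4: Avail, as in `df -BG`
echo "$pct $avail"
```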
	I0917 00:26:21.306236  859826 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-160127
	I0917 00:26:21.322664  859826 ssh_runner.go:195] Run: cat /version.json
	I0917 00:26:21.322728  859826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-160127
	I0917 00:26:21.322973  859826 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 00:26:21.323032  859826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-160127
	I0917 00:26:21.340717  859826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/21550-857204/.minikube/machines/addons-160127/id_rsa Username:docker}
	I0917 00:26:21.342793  859826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/21550-857204/.minikube/machines/addons-160127/id_rsa Username:docker}
	I0917 00:26:21.580588  859826 ssh_runner.go:195] Run: systemctl --version
	I0917 00:26:21.585187  859826 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 00:26:21.730667  859826 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 00:26:21.735003  859826 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:26:21.756022  859826 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0917 00:26:21.756111  859826 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:26:21.790910  859826 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
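The two `find … -exec mv` sweeps above rename loopback and bridge/podman CNI configs out of the way. A sketch against a scratch directory (file names are assumptions matching the ones the log reports as disabled):

```shell
#!/bin/bash
# Disable loopback and bridge CNI configs by renaming them to *.mk_disabled,
# using a temp dir in place of /etc/cni/net.d.
cni=$(mktemp -d)
touch "$cni/200-loopback.conf" "$cni/87-podman-bridge.conflist" "$cni/100-crio-bridge.conf"
# Pass {} as a positional parameter instead of splicing it into the sh -c string.
find "$cni" -maxdepth 1 -type f -name '*loopback.conf*' -not -name '*.mk_disabled' \
  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
find "$cni" -maxdepth 1 -type f \( \( -name '*bridge*' -or -name '*podman*' \) \
  -and -not -name '*.mk_disabled' \) -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
ls "$cni"
```

The `.mk_disabled` suffix keeps the original files recoverable while making crio ignore them, so kindnet can own pod networking.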
	I0917 00:26:21.790936  859826 start.go:495] detecting cgroup driver to use...
	I0917 00:26:21.790967  859826 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0917 00:26:21.791016  859826 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 00:26:21.806983  859826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 00:26:21.818886  859826 docker.go:218] disabling cri-docker service (if available) ...
	I0917 00:26:21.818951  859826 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 00:26:21.833293  859826 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 00:26:21.848673  859826 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 00:26:21.929630  859826 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 00:26:22.018125  859826 docker.go:234] disabling docker service ...
	I0917 00:26:22.018223  859826 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 00:26:22.038407  859826 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 00:26:22.050803  859826 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 00:26:22.138001  859826 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 00:26:22.236882  859826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 00:26:22.248373  859826 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:26:22.266399  859826 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0917 00:26:22.266466  859826 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:26:22.276257  859826 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0917 00:26:22.276379  859826 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:26:22.286212  859826 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:26:22.295987  859826 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:26:22.305812  859826 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 00:26:22.314560  859826 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:26:22.324389  859826 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:26:22.340109  859826 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:26:22.349788  859826 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 00:26:22.358141  859826 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
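The sequence of `sed` edits above rewrites `02-crio.conf` for the pause image, cgroup driver, conmon cgroup, and unprivileged-port sysctl. A minimal reproduction against a scratch copy (the starting file contents are an assumption; requires GNU sed for `-i` and the `a`-text escapes):

```shell
#!/bin/bash
# Apply the crio drop-in edits from the log to a throwaway config file.
conf=$(mktemp)
cat > "$conf" <<'EOF'
[crio.image]
pause_image = "registry.k8s.io/pause:3.9"
[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
EOF
sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$conf"
sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$conf"
sed -i '/conmon_cgroup = .*/d' "$conf"
sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$conf"
grep -q '^ *default_sysctls' "$conf" || \
  sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$conf"
sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$conf"
cat "$conf"
```

The delete-then-append pair guarantees exactly one `conmon_cgroup` line, and the `grep || sed` guard makes the `default_sysctls` insertion idempotent across reruns.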
	I0917 00:26:22.366981  859826 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:26:22.448879  859826 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0917 00:26:22.575884  859826 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 00:26:22.575974  859826 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 00:26:22.579725  859826 start.go:563] Will wait 60s for crictl version
	I0917 00:26:22.579791  859826 ssh_runner.go:195] Run: which crictl
	I0917 00:26:22.583461  859826 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:26:22.625058  859826 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
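The runtime name and version can be pulled out of a `crictl version` transcript like the one above; here the output is a captured string (an assumption) rather than a live call:

```shell
#!/bin/bash
# Parse RuntimeName / RuntimeVersion from saved `crictl version` output.
version_output='Version:  0.1.0
RuntimeName:  cri-o
RuntimeVersion:  1.24.6
RuntimeApiVersion:  v1'
runtime=$(printf '%s\n' "$version_output" | awk '/^RuntimeName:/{print $2}')
runtime_version=$(printf '%s\n' "$version_output" | awk '/^RuntimeVersion:/{print $2}')
echo "$runtime $runtime_version"
```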
	I0917 00:26:22.625235  859826 ssh_runner.go:195] Run: crio --version
	I0917 00:26:22.662843  859826 ssh_runner.go:195] Run: crio --version
	I0917 00:26:22.706599  859826 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0917 00:26:22.709511  859826 cli_runner.go:164] Run: docker network inspect addons-160127 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:26:22.726510  859826 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0917 00:26:22.730011  859826 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
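The `/etc/hosts` rewrite above uses a filter-then-append pattern so the entry is replaced rather than duplicated. A sketch against a temp file (the real command writes back via `sudo cp`):

```shell
#!/bin/bash
# Idempotently (re)insert the host.minikube.internal entry into a hosts file.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.49.1\thost.minikube.internal\n' > "$hosts"
{ grep -v $'\thost.minikube.internal$' "$hosts"; \
  printf '192.168.49.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
cat "$hosts"
```

Because the old entry is filtered out first, running the command repeatedly always leaves exactly one `host.minikube.internal` line.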
	I0917 00:26:22.741044  859826 kubeadm.go:875] updating cluster {Name:addons-160127 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-160127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 00:26:22.741155  859826 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 00:26:22.741218  859826 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 00:26:22.822217  859826 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 00:26:22.822249  859826 crio.go:433] Images already preloaded, skipping extraction
	I0917 00:26:22.822309  859826 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 00:26:22.860337  859826 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 00:26:22.860364  859826 cache_images.go:85] Images are preloaded, skipping loading
	I0917 00:26:22.860372  859826 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 crio true true} ...
	I0917 00:26:22.860472  859826 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-160127 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:addons-160127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 00:26:22.860584  859826 ssh_runner.go:195] Run: crio config
	I0917 00:26:22.909471  859826 cni.go:84] Creating CNI manager for ""
	I0917 00:26:22.909495  859826 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0917 00:26:22.909506  859826 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 00:26:22.909528  859826 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-160127 NodeName:addons-160127 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 00:26:22.909653  859826 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-160127"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
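The generated kubeadm config above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A quick sanity check one could run on such a file, shown here against a trimmed-down stand-in (the abbreviated contents are an assumption):

```shell
#!/bin/bash
# Count the YAML documents and list their kinds in a kubeadm-style config.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF
docs=$(( $(grep -c '^---$' "$cfg") + 1 ))          # separators + 1 = document count
kinds=$(awk '/^kind:/{print $2}' "$cfg" | paste -sd, -)
echo "$docs documents: $kinds"
```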
	
	I0917 00:26:22.909725  859826 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0917 00:26:22.918732  859826 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 00:26:22.918814  859826 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 00:26:22.927844  859826 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0917 00:26:22.946544  859826 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 00:26:22.964800  859826 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I0917 00:26:22.983104  859826 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0917 00:26:22.986617  859826 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 00:26:22.997517  859826 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:26:23.089905  859826 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:26:23.104125  859826 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/addons-160127 for IP: 192.168.49.2
	I0917 00:26:23.104153  859826 certs.go:194] generating shared ca certs ...
	I0917 00:26:23.104172  859826 certs.go:226] acquiring lock for ca certs: {Name:mk44de2cd489e13684c1d414a8a1e69ffc09119b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:26:23.104318  859826 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-857204/.minikube/ca.key
	I0917 00:26:24.353387  859826 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-857204/.minikube/ca.crt ...
	I0917 00:26:24.353421  859826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-857204/.minikube/ca.crt: {Name:mk29d9c0a535aec4dd0d2ed09acebf76e3762316 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:26:24.353644  859826 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-857204/.minikube/ca.key ...
	I0917 00:26:24.353661  859826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-857204/.minikube/ca.key: {Name:mk1066ca0f59ffa35b4b7021a5ddd2aa6a556ea3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:26:24.353753  859826 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-857204/.minikube/proxy-client-ca.key
	I0917 00:26:24.913182  859826 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-857204/.minikube/proxy-client-ca.crt ...
	I0917 00:26:24.913216  859826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-857204/.minikube/proxy-client-ca.crt: {Name:mkaad4acd3965032de2b7652d54a9bb005b9056e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:26:24.913400  859826 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-857204/.minikube/proxy-client-ca.key ...
	I0917 00:26:24.913413  859826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-857204/.minikube/proxy-client-ca.key: {Name:mkdaed60f12f2a41fdf0a33621bb1a1f05c21b09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:26:24.913496  859826 certs.go:256] generating profile certs ...
	I0917 00:26:24.913562  859826 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/addons-160127/client.key
	I0917 00:26:24.913583  859826 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/addons-160127/client.crt with IP's: []
	I0917 00:26:25.373155  859826 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/addons-160127/client.crt ...
	I0917 00:26:25.373187  859826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/addons-160127/client.crt: {Name:mkb0a7bbad1fd023e487b41f317e502c74fbe9c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:26:25.373373  859826 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/addons-160127/client.key ...
	I0917 00:26:25.373387  859826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/addons-160127/client.key: {Name:mk95b8507944c711e5c84eb66f0bc762e5bde4e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:26:25.373468  859826 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/addons-160127/apiserver.key.bd0c9ece
	I0917 00:26:25.373493  859826 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/addons-160127/apiserver.crt.bd0c9ece with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0917 00:26:26.836926  859826 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/addons-160127/apiserver.crt.bd0c9ece ...
	I0917 00:26:26.836967  859826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/addons-160127/apiserver.crt.bd0c9ece: {Name:mk88be9870c73c5c123760efa95ca1ee61dc5b67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:26:26.837151  859826 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/addons-160127/apiserver.key.bd0c9ece ...
	I0917 00:26:26.837166  859826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/addons-160127/apiserver.key.bd0c9ece: {Name:mk8bb55cecd5b2f179cf58e99f2199cbe6224a63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:26:26.837254  859826 certs.go:381] copying /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/addons-160127/apiserver.crt.bd0c9ece -> /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/addons-160127/apiserver.crt
	I0917 00:26:26.837331  859826 certs.go:385] copying /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/addons-160127/apiserver.key.bd0c9ece -> /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/addons-160127/apiserver.key
	I0917 00:26:26.837387  859826 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/addons-160127/proxy-client.key
	I0917 00:26:26.837408  859826 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/addons-160127/proxy-client.crt with IP's: []
	I0917 00:26:27.155397  859826 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/addons-160127/proxy-client.crt ...
	I0917 00:26:27.155437  859826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/addons-160127/proxy-client.crt: {Name:mk52163e9975c15585e6ef57b33152488b1a14bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:26:27.155624  859826 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/addons-160127/proxy-client.key ...
	I0917 00:26:27.155638  859826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/addons-160127/proxy-client.key: {Name:mk2e68a031d39c85661177b06da0b707556ee3f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:26:27.155830  859826 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-857204/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 00:26:27.155874  859826 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-857204/.minikube/certs/ca.pem (1078 bytes)
	I0917 00:26:27.155904  859826 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-857204/.minikube/certs/cert.pem (1123 bytes)
	I0917 00:26:27.155930  859826 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-857204/.minikube/certs/key.pem (1679 bytes)
	I0917 00:26:27.156517  859826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-857204/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 00:26:27.183424  859826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-857204/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 00:26:27.207836  859826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-857204/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 00:26:27.231671  859826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-857204/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0917 00:26:27.255198  859826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/addons-160127/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0917 00:26:27.278823  859826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/addons-160127/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0917 00:26:27.302487  859826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/addons-160127/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 00:26:27.325503  859826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/addons-160127/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0917 00:26:27.349017  859826 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-857204/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 00:26:27.372445  859826 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 00:26:27.390131  859826 ssh_runner.go:195] Run: openssl version
	I0917 00:26:27.395548  859826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 00:26:27.404625  859826 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:26:27.407991  859826 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 00:26 /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:26:27.408058  859826 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:26:27.414812  859826 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
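The `openssl x509 -hash` plus symlink steps above install the minikube CA under its subject-hash name, which is how OpenSSL locates trust anchors in a certs directory. A sketch with a throwaway self-signed cert and a temp directory in place of `/etc/ssl/certs` (requires the `openssl` CLI):

```shell
#!/bin/bash
# Link a CA cert under its OpenSSL subject-hash name, e.g. b5213941.0.
certs=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=minikubeCA" \
  -keyout "$certs/ca.key" -out "$certs/minikubeCA.pem" -days 1 2>/dev/null
hash=$(openssl x509 -hash -noout -in "$certs/minikubeCA.pem")
test -L "$certs/$hash.0" || ln -fs "$certs/minikubeCA.pem" "$certs/$hash.0"
ls -la "$certs"
```

The `test -L || ln -fs` guard mirrors the log's command: it only creates the link when it is missing, so repeated provisioning runs are harmless.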
	I0917 00:26:27.423572  859826 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 00:26:27.426677  859826 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0917 00:26:27.426727  859826 kubeadm.go:392] StartCluster: {Name:addons-160127 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-160127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:26:27.426800  859826 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0917 00:26:27.426855  859826 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 00:26:27.462887  859826 cri.go:89] found id: ""
	I0917 00:26:27.462969  859826 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 00:26:27.471614  859826 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 00:26:27.480102  859826 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0917 00:26:27.480166  859826 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 00:26:27.488943  859826 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 00:26:27.489002  859826 kubeadm.go:157] found existing configuration files:
	
	I0917 00:26:27.489054  859826 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 00:26:27.497406  859826 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 00:26:27.497475  859826 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 00:26:27.505467  859826 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 00:26:27.514315  859826 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 00:26:27.514386  859826 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 00:26:27.523050  859826 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 00:26:27.531898  859826 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 00:26:27.531961  859826 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 00:26:27.540533  859826 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 00:26:27.549638  859826 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 00:26:27.549743  859826 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 00:26:27.558538  859826 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0917 00:26:27.596375  859826 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0917 00:26:27.596603  859826 kubeadm.go:310] [preflight] Running pre-flight checks
	I0917 00:26:27.617702  859826 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0917 00:26:27.617782  859826 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1084-aws
	I0917 00:26:27.617823  859826 kubeadm.go:310] OS: Linux
	I0917 00:26:27.617876  859826 kubeadm.go:310] CGROUPS_CPU: enabled
	I0917 00:26:27.617930  859826 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0917 00:26:27.617983  859826 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0917 00:26:27.618037  859826 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0917 00:26:27.618091  859826 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0917 00:26:27.618145  859826 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0917 00:26:27.618196  859826 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0917 00:26:27.618249  859826 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0917 00:26:27.618301  859826 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0917 00:26:27.694450  859826 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0917 00:26:27.694569  859826 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0917 00:26:27.694668  859826 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0917 00:26:27.701652  859826 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 00:26:27.707745  859826 out.go:252]   - Generating certificates and keys ...
	I0917 00:26:27.707852  859826 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0917 00:26:27.707928  859826 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0917 00:26:27.948650  859826 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0917 00:26:28.217006  859826 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0917 00:26:28.388827  859826 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0917 00:26:29.365664  859826 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0917 00:26:30.187415  859826 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0917 00:26:30.187769  859826 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-160127 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0917 00:26:30.696927  859826 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0917 00:26:30.697092  859826 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-160127 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0917 00:26:32.263931  859826 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0917 00:26:32.512052  859826 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0917 00:26:32.596311  859826 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0917 00:26:32.596589  859826 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 00:26:34.008107  859826 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 00:26:34.923374  859826 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0917 00:26:35.902321  859826 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 00:26:36.386548  859826 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 00:26:37.432480  859826 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 00:26:37.436015  859826 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 00:26:37.439835  859826 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 00:26:37.443291  859826 out.go:252]   - Booting up control plane ...
	I0917 00:26:37.443406  859826 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 00:26:37.443682  859826 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 00:26:37.444874  859826 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 00:26:37.463545  859826 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 00:26:37.463664  859826 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0917 00:26:37.469940  859826 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0917 00:26:37.470274  859826 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 00:26:37.470519  859826 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0917 00:26:37.562965  859826 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0917 00:26:37.563092  859826 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0917 00:26:39.564625  859826 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 2.00167518s
	I0917 00:26:39.567903  859826 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0917 00:26:39.568007  859826 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0917 00:26:39.568307  859826 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0917 00:26:39.568401  859826 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0917 00:26:44.113138  859826 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 4.544695303s
	I0917 00:26:45.465752  859826 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 5.897787335s
	I0917 00:26:46.070117  859826 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 6.501932475s
	I0917 00:26:46.093549  859826 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0917 00:26:46.109044  859826 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0917 00:26:46.123079  859826 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0917 00:26:46.123291  859826 kubeadm.go:310] [mark-control-plane] Marking the node addons-160127 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0917 00:26:46.134796  859826 kubeadm.go:310] [bootstrap-token] Using token: 8p5m09.kp68ge1vysaff7qn
	I0917 00:26:46.139717  859826 out.go:252]   - Configuring RBAC rules ...
	I0917 00:26:46.139863  859826 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0917 00:26:46.141653  859826 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0917 00:26:46.151713  859826 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0917 00:26:46.155749  859826 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0917 00:26:46.159501  859826 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0917 00:26:46.163223  859826 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0917 00:26:46.477174  859826 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0917 00:26:46.956972  859826 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0917 00:26:47.476790  859826 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0917 00:26:47.478134  859826 kubeadm.go:310] 
	I0917 00:26:47.478236  859826 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0917 00:26:47.478246  859826 kubeadm.go:310] 
	I0917 00:26:47.478327  859826 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0917 00:26:47.478336  859826 kubeadm.go:310] 
	I0917 00:26:47.478367  859826 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0917 00:26:47.478439  859826 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0917 00:26:47.478494  859826 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0917 00:26:47.478503  859826 kubeadm.go:310] 
	I0917 00:26:47.478559  859826 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0917 00:26:47.478568  859826 kubeadm.go:310] 
	I0917 00:26:47.478617  859826 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0917 00:26:47.478625  859826 kubeadm.go:310] 
	I0917 00:26:47.478680  859826 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0917 00:26:47.478762  859826 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0917 00:26:47.478836  859826 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0917 00:26:47.478845  859826 kubeadm.go:310] 
	I0917 00:26:47.478933  859826 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0917 00:26:47.479017  859826 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0917 00:26:47.479025  859826 kubeadm.go:310] 
	I0917 00:26:47.479125  859826 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 8p5m09.kp68ge1vysaff7qn \
	I0917 00:26:47.479239  859826 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1d4f282d631102b8970b3f766d0f4eef321ee130dc16087fd0ddbf8eeb066b38 \
	I0917 00:26:47.479264  859826 kubeadm.go:310] 	--control-plane 
	I0917 00:26:47.479272  859826 kubeadm.go:310] 
	I0917 00:26:47.479361  859826 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0917 00:26:47.479369  859826 kubeadm.go:310] 
	I0917 00:26:47.479474  859826 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 8p5m09.kp68ge1vysaff7qn \
	I0917 00:26:47.479585  859826 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1d4f282d631102b8970b3f766d0f4eef321ee130dc16087fd0ddbf8eeb066b38 
	I0917 00:26:47.483418  859826 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0917 00:26:47.483684  859826 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0917 00:26:47.483800  859826 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0917 00:26:47.483822  859826 cni.go:84] Creating CNI manager for ""
	I0917 00:26:47.483829  859826 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0917 00:26:47.487043  859826 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0917 00:26:47.489968  859826 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0917 00:26:47.493622  859826 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0917 00:26:47.493643  859826 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0917 00:26:47.512922  859826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0917 00:26:47.781035  859826 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0917 00:26:47.781169  859826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:26:47.781260  859826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-160127 minikube.k8s.io/updated_at=2025_09_17T00_26_47_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a minikube.k8s.io/name=addons-160127 minikube.k8s.io/primary=true
	I0917 00:26:47.793546  859826 ops.go:34] apiserver oom_adj: -16
	I0917 00:26:47.907529  859826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:26:48.408027  859826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:26:48.908045  859826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:26:49.407598  859826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:26:49.908597  859826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:26:50.408308  859826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:26:50.908057  859826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:26:51.408545  859826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:26:51.908044  859826 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 00:26:52.002571  859826 kubeadm.go:1105] duration metric: took 4.221445753s to wait for elevateKubeSystemPrivileges
	I0917 00:26:52.002601  859826 kubeadm.go:394] duration metric: took 24.57587833s to StartCluster
	I0917 00:26:52.002618  859826 settings.go:142] acquiring lock: {Name:mk94fbfa40f18dd5094489d1f6af74533ca88b5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:26:52.002769  859826 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21550-857204/kubeconfig
	I0917 00:26:52.003163  859826 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21550-857204/kubeconfig: {Name:mk1a5767e29a038e204e9c44cf5784461133d254 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:26:52.003411  859826 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0917 00:26:52.003425  859826 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0917 00:26:52.003510  859826 addons.go:69] Setting yakd=true in profile "addons-160127"
	I0917 00:26:52.003404  859826 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0917 00:26:52.003524  859826 addons.go:238] Setting addon yakd=true in "addons-160127"
	I0917 00:26:52.003723  859826 config.go:182] Loaded profile config "addons-160127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:26:52.003749  859826 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-160127"
	I0917 00:26:52.003757  859826 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-160127"
	I0917 00:26:52.003772  859826 host.go:66] Checking if "addons-160127" exists ...
	I0917 00:26:52.004212  859826 cli_runner.go:164] Run: docker container inspect addons-160127 --format={{.State.Status}}
	I0917 00:26:52.003547  859826 host.go:66] Checking if "addons-160127" exists ...
	I0917 00:26:52.005019  859826 cli_runner.go:164] Run: docker container inspect addons-160127 --format={{.State.Status}}
	I0917 00:26:52.005361  859826 addons.go:69] Setting cloud-spanner=true in profile "addons-160127"
	I0917 00:26:52.005393  859826 addons.go:238] Setting addon cloud-spanner=true in "addons-160127"
	I0917 00:26:52.005429  859826 host.go:66] Checking if "addons-160127" exists ...
	I0917 00:26:52.005982  859826 cli_runner.go:164] Run: docker container inspect addons-160127 --format={{.State.Status}}
	I0917 00:26:52.014972  859826 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-160127"
	I0917 00:26:52.015052  859826 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-160127"
	I0917 00:26:52.015092  859826 host.go:66] Checking if "addons-160127" exists ...
	I0917 00:26:52.015565  859826 cli_runner.go:164] Run: docker container inspect addons-160127 --format={{.State.Status}}
	I0917 00:26:52.018114  859826 out.go:179] * Verifying Kubernetes components...
	I0917 00:26:52.021486  859826 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:26:52.021548  859826 addons.go:69] Setting registry=true in profile "addons-160127"
	I0917 00:26:52.021570  859826 addons.go:238] Setting addon registry=true in "addons-160127"
	I0917 00:26:52.021605  859826 host.go:66] Checking if "addons-160127" exists ...
	I0917 00:26:52.022060  859826 cli_runner.go:164] Run: docker container inspect addons-160127 --format={{.State.Status}}
	I0917 00:26:52.022376  859826 addons.go:69] Setting registry-creds=true in profile "addons-160127"
	I0917 00:26:52.022402  859826 addons.go:238] Setting addon registry-creds=true in "addons-160127"
	I0917 00:26:52.022428  859826 host.go:66] Checking if "addons-160127" exists ...
	I0917 00:26:52.022460  859826 addons.go:69] Setting default-storageclass=true in profile "addons-160127"
	I0917 00:26:52.022487  859826 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-160127"
	I0917 00:26:52.022827  859826 cli_runner.go:164] Run: docker container inspect addons-160127 --format={{.State.Status}}
	I0917 00:26:52.022834  859826 cli_runner.go:164] Run: docker container inspect addons-160127 --format={{.State.Status}}
	I0917 00:26:52.036795  859826 addons.go:69] Setting gcp-auth=true in profile "addons-160127"
	I0917 00:26:52.036885  859826 mustload.go:65] Loading cluster: addons-160127
	I0917 00:26:52.037129  859826 config.go:182] Loaded profile config "addons-160127": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:26:52.037438  859826 cli_runner.go:164] Run: docker container inspect addons-160127 --format={{.State.Status}}
	I0917 00:26:52.040750  859826 addons.go:69] Setting storage-provisioner=true in profile "addons-160127"
	I0917 00:26:52.040784  859826 addons.go:238] Setting addon storage-provisioner=true in "addons-160127"
	I0917 00:26:52.040820  859826 host.go:66] Checking if "addons-160127" exists ...
	I0917 00:26:52.041286  859826 cli_runner.go:164] Run: docker container inspect addons-160127 --format={{.State.Status}}
	I0917 00:26:52.051493  859826 addons.go:69] Setting ingress=true in profile "addons-160127"
	I0917 00:26:52.051572  859826 addons.go:238] Setting addon ingress=true in "addons-160127"
	I0917 00:26:52.051646  859826 host.go:66] Checking if "addons-160127" exists ...
	I0917 00:26:52.054739  859826 cli_runner.go:164] Run: docker container inspect addons-160127 --format={{.State.Status}}
	I0917 00:26:52.060628  859826 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-160127"
	I0917 00:26:52.060666  859826 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-160127"
	I0917 00:26:52.061009  859826 cli_runner.go:164] Run: docker container inspect addons-160127 --format={{.State.Status}}
	I0917 00:26:52.083123  859826 addons.go:69] Setting volcano=true in profile "addons-160127"
	I0917 00:26:52.083159  859826 addons.go:238] Setting addon volcano=true in "addons-160127"
	I0917 00:26:52.083198  859826 host.go:66] Checking if "addons-160127" exists ...
	I0917 00:26:52.083687  859826 cli_runner.go:164] Run: docker container inspect addons-160127 --format={{.State.Status}}
	I0917 00:26:52.086352  859826 addons.go:69] Setting ingress-dns=true in profile "addons-160127"
	I0917 00:26:52.086387  859826 addons.go:238] Setting addon ingress-dns=true in "addons-160127"
	I0917 00:26:52.086430  859826 host.go:66] Checking if "addons-160127" exists ...
	I0917 00:26:52.086876  859826 cli_runner.go:164] Run: docker container inspect addons-160127 --format={{.State.Status}}
	I0917 00:26:52.106745  859826 addons.go:69] Setting volumesnapshots=true in profile "addons-160127"
	I0917 00:26:52.106779  859826 addons.go:238] Setting addon volumesnapshots=true in "addons-160127"
	I0917 00:26:52.106816  859826 host.go:66] Checking if "addons-160127" exists ...
	I0917 00:26:52.107300  859826 cli_runner.go:164] Run: docker container inspect addons-160127 --format={{.State.Status}}
	I0917 00:26:52.109696  859826 addons.go:69] Setting inspektor-gadget=true in profile "addons-160127"
	I0917 00:26:52.109736  859826 addons.go:238] Setting addon inspektor-gadget=true in "addons-160127"
	I0917 00:26:52.109772  859826 host.go:66] Checking if "addons-160127" exists ...
	I0917 00:26:52.110216  859826 cli_runner.go:164] Run: docker container inspect addons-160127 --format={{.State.Status}}
	I0917 00:26:52.134441  859826 addons.go:69] Setting metrics-server=true in profile "addons-160127"
	I0917 00:26:52.134478  859826 addons.go:238] Setting addon metrics-server=true in "addons-160127"
	I0917 00:26:52.134516  859826 host.go:66] Checking if "addons-160127" exists ...
	I0917 00:26:52.134982  859826 cli_runner.go:164] Run: docker container inspect addons-160127 --format={{.State.Status}}
	I0917 00:26:52.158019  859826 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-160127"
	I0917 00:26:52.158050  859826 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-160127"
	I0917 00:26:52.158091  859826 host.go:66] Checking if "addons-160127" exists ...
	I0917 00:26:52.158544  859826 cli_runner.go:164] Run: docker container inspect addons-160127 --format={{.State.Status}}
	I0917 00:26:52.224893  859826 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0917 00:26:52.264770  859826 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0917 00:26:52.266393  859826 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0917 00:26:52.266715  859826 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0917 00:26:52.286379  859826 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0917 00:26:52.291893  859826 out.go:179]   - Using image docker.io/registry:3.0.0
	I0917 00:26:52.297973  859826 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0917 00:26:52.298047  859826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0917 00:26:52.298145  859826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-160127
	I0917 00:26:52.308656  859826 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-160127"
	I0917 00:26:52.308706  859826 host.go:66] Checking if "addons-160127" exists ...
	I0917 00:26:52.309147  859826 cli_runner.go:164] Run: docker container inspect addons-160127 --format={{.State.Status}}
	I0917 00:26:52.325481  859826 addons.go:238] Setting addon default-storageclass=true in "addons-160127"
	I0917 00:26:52.325531  859826 host.go:66] Checking if "addons-160127" exists ...
	I0917 00:26:52.325950  859826 cli_runner.go:164] Run: docker container inspect addons-160127 --format={{.State.Status}}
	I0917 00:26:52.327461  859826 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I0917 00:26:52.328826  859826 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0917 00:26:52.328845  859826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0917 00:26:52.328911  859826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-160127
	I0917 00:26:52.331233  859826 host.go:66] Checking if "addons-160127" exists ...
	I0917 00:26:52.333475  859826 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0917 00:26:52.333795  859826 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	W0917 00:26:52.334524  859826 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0917 00:26:52.334761  859826 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0917 00:26:52.334775  859826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0917 00:26:52.334843  859826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-160127
	I0917 00:26:52.341482  859826 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0917 00:26:52.341506  859826 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0917 00:26:52.341571  859826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-160127
	I0917 00:26:52.366226  859826 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0917 00:26:52.366250  859826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0917 00:26:52.366316  859826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-160127
	I0917 00:26:52.366515  859826 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I0917 00:26:52.372617  859826 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0917 00:26:52.372647  859826 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I0917 00:26:52.372719  859826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-160127
	I0917 00:26:52.384988  859826 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0917 00:26:52.387902  859826 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0917 00:26:52.392285  859826 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0917 00:26:52.394660  859826 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I0917 00:26:52.398696  859826 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0917 00:26:52.418713  859826 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 00:26:52.429014  859826 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 00:26:52.429094  859826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0917 00:26:52.429194  859826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-160127
	I0917 00:26:52.437991  859826 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.3
	I0917 00:26:52.438223  859826 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I0917 00:26:52.438373  859826 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0917 00:26:52.451541  859826 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I0917 00:26:52.451550  859826 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0917 00:26:52.457910  859826 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0917 00:26:52.457985  859826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I0917 00:26:52.458085  859826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-160127
	I0917 00:26:52.464796  859826 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0917 00:26:52.465344  859826 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0917 00:26:52.465431  859826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-160127
	I0917 00:26:52.479367  859826 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0917 00:26:52.479775  859826 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0917 00:26:52.479790  859826 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0917 00:26:52.479860  859826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-160127
	I0917 00:26:52.482995  859826 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0917 00:26:52.483963  859826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0917 00:26:52.484035  859826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-160127
	I0917 00:26:52.506653  859826 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0917 00:26:52.512661  859826 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0917 00:26:52.512698  859826 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0917 00:26:52.512770  859826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-160127
	I0917 00:26:52.513127  859826 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0917 00:26:52.513186  859826 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0917 00:26:52.513278  859826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-160127
	I0917 00:26:52.534614  859826 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0917 00:26:52.541019  859826 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0917 00:26:52.541046  859826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0917 00:26:52.541114  859826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-160127
	I0917 00:26:52.559359  859826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/21550-857204/.minikube/machines/addons-160127/id_rsa Username:docker}
	I0917 00:26:52.563257  859826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/21550-857204/.minikube/machines/addons-160127/id_rsa Username:docker}
	I0917 00:26:52.568687  859826 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0917 00:26:52.571370  859826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/21550-857204/.minikube/machines/addons-160127/id_rsa Username:docker}
	I0917 00:26:52.571381  859826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/21550-857204/.minikube/machines/addons-160127/id_rsa Username:docker}
	I0917 00:26:52.575701  859826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/21550-857204/.minikube/machines/addons-160127/id_rsa Username:docker}
	I0917 00:26:52.577660  859826 out.go:179]   - Using image docker.io/busybox:stable
	I0917 00:26:52.580845  859826 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0917 00:26:52.580866  859826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0917 00:26:52.580936  859826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-160127
	I0917 00:26:52.600865  859826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/21550-857204/.minikube/machines/addons-160127/id_rsa Username:docker}
	I0917 00:26:52.685592  859826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/21550-857204/.minikube/machines/addons-160127/id_rsa Username:docker}
	I0917 00:26:52.700851  859826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/21550-857204/.minikube/machines/addons-160127/id_rsa Username:docker}
	I0917 00:26:52.710981  859826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/21550-857204/.minikube/machines/addons-160127/id_rsa Username:docker}
	I0917 00:26:52.716141  859826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/21550-857204/.minikube/machines/addons-160127/id_rsa Username:docker}
	I0917 00:26:52.724711  859826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/21550-857204/.minikube/machines/addons-160127/id_rsa Username:docker}
	I0917 00:26:52.726269  859826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/21550-857204/.minikube/machines/addons-160127/id_rsa Username:docker}
	W0917 00:26:52.733408  859826 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0917 00:26:52.733441  859826 retry.go:31] will retry after 322.948129ms: ssh: handshake failed: EOF
	I0917 00:26:52.741747  859826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/21550-857204/.minikube/machines/addons-160127/id_rsa Username:docker}
	I0917 00:26:52.749048  859826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/21550-857204/.minikube/machines/addons-160127/id_rsa Username:docker}
	I0917 00:26:52.752373  859826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/21550-857204/.minikube/machines/addons-160127/id_rsa Username:docker}
	W0917 00:26:52.754022  859826 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0917 00:26:52.754044  859826 retry.go:31] will retry after 275.205265ms: ssh: handshake failed: EOF
	I0917 00:26:52.813021  859826 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:26:53.056162  859826 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0917 00:26:53.056246  859826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0917 00:26:53.060161  859826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0917 00:26:53.067405  859826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0917 00:26:53.074772  859826 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0917 00:26:53.074843  859826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I0917 00:26:53.094471  859826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0917 00:26:53.101449  859826 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0917 00:26:53.101477  859826 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0917 00:26:53.112146  859826 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0917 00:26:53.112184  859826 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0917 00:26:53.119584  859826 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0917 00:26:53.119606  859826 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0917 00:26:53.143844  859826 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0917 00:26:53.143920  859826 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0917 00:26:53.184552  859826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0917 00:26:53.220732  859826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0917 00:26:53.223698  859826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0917 00:26:53.275243  859826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 00:26:53.280186  859826 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0917 00:26:53.280267  859826 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0917 00:26:53.283566  859826 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0917 00:26:53.283635  859826 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0917 00:26:53.302034  859826 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0917 00:26:53.302108  859826 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0917 00:26:53.303016  859826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0917 00:26:53.303304  859826 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0917 00:26:53.303344  859826 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0917 00:26:53.331848  859826 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0917 00:26:53.331923  859826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0917 00:26:53.437018  859826 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0917 00:26:53.437098  859826 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0917 00:26:53.496612  859826 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0917 00:26:53.496705  859826 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0917 00:26:53.507062  859826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0917 00:26:53.510612  859826 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0917 00:26:53.510686  859826 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0917 00:26:53.544410  859826 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0917 00:26:53.544493  859826 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0917 00:26:53.571829  859826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0917 00:26:53.602941  859826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0917 00:26:53.629559  859826 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0917 00:26:53.629637  859826 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0917 00:26:53.690219  859826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0917 00:26:53.696135  859826 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0917 00:26:53.696211  859826 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0917 00:26:53.714691  859826 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0917 00:26:53.714763  859826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0917 00:26:53.786425  859826 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0917 00:26:53.786499  859826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0917 00:26:53.868054  859826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0917 00:26:53.872676  859826 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0917 00:26:53.872698  859826 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0917 00:26:53.954331  859826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0917 00:26:54.030161  859826 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0917 00:26:54.030190  859826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0917 00:26:54.130072  859826 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0917 00:26:54.130096  859826 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0917 00:26:54.198009  859826 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0917 00:26:54.198033  859826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0917 00:26:54.216800  859826 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0917 00:26:54.216824  859826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0917 00:26:54.236762  859826 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0917 00:26:54.236791  859826 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0917 00:26:54.262353  859826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0917 00:26:55.373862  859826 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.107437122s)
	I0917 00:26:55.373939  859826 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0917 00:26:55.374529  859826 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.561469016s)
	I0917 00:26:55.376081  859826 node_ready.go:35] waiting up to 6m0s for node "addons-160127" to be "Ready" ...
	I0917 00:26:56.095464  859826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (3.035205698s)
	I0917 00:26:56.163240  859826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.095739691s)
	I0917 00:26:56.268376  859826 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-160127" context rescaled to 1 replicas
	I0917 00:26:57.149762  859826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.965124062s)
	I0917 00:26:57.149864  859826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.929070408s)
	I0917 00:26:57.149889  859826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.055149178s)
	I0917 00:26:57.149935  859826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.92616962s)
	I0917 00:26:57.295131  859826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.019805057s)
	W0917 00:26:57.393719  859826 node_ready.go:57] node "addons-160127" has "Ready":"False" status (will retry)
	I0917 00:26:57.420352  859826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.117266423s)
	W0917 00:26:57.420434  859826 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0917 00:26:57.420508  859826 retry.go:31] will retry after 212.688669ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0917 00:26:57.420533  859826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (3.913352163s)
	I0917 00:26:57.420633  859826 addons.go:479] Verifying addon registry=true in "addons-160127"
	I0917 00:26:57.425783  859826 out.go:179] * Verifying registry addon...
	I0917 00:26:57.429560  859826 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0917 00:26:57.449149  859826 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0917 00:26:57.449169  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:26:57.633770  859826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0917 00:26:57.989685  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:26:58.292219  859826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.720305861s)
	I0917 00:26:58.292299  859826 addons.go:479] Verifying addon ingress=true in "addons-160127"
	I0917 00:26:58.292509  859826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.689493613s)
	I0917 00:26:58.292694  859826 addons.go:479] Verifying addon metrics-server=true in "addons-160127"
	I0917 00:26:58.292741  859826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.602450979s)
	I0917 00:26:58.292859  859826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.424774703s)
	I0917 00:26:58.295605  859826 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-160127 service yakd-dashboard -n yakd-dashboard
	
	I0917 00:26:58.295790  859826 out.go:179] * Verifying ingress addon...
	I0917 00:26:58.299812  859826 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0917 00:26:58.322250  859826 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0917 00:26:58.322270  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:26:58.455743  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:26:58.477031  859826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.522651912s)
	W0917 00:26:58.477124  859826 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0917 00:26:58.477159  859826 retry.go:31] will retry after 338.546739ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0917 00:26:58.816210  859826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0917 00:26:58.816452  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:26:58.931084  859826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.668686861s)
	I0917 00:26:58.931163  859826 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-160127"
	I0917 00:26:58.931357  859826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.297564285s)
	W0917 00:26:58.931423  859826 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0917 00:26:58.931466  859826 retry.go:31] will retry after 201.194281ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0917 00:26:58.934167  859826 out.go:179] * Verifying csi-hostpath-driver addon...
	I0917 00:26:58.937830  859826 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0917 00:26:58.944215  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:26:58.958871  859826 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0917 00:26:58.958954  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:26:59.133221  859826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0917 00:26:59.308886  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:26:59.435193  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:26:59.456712  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:26:59.804527  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0917 00:26:59.879892  859826 node_ready.go:57] node "addons-160127" has "Ready":"False" status (will retry)
	I0917 00:26:59.933056  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:26:59.946066  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:00.312575  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:00.330958  859826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.197633177s)
	W0917 00:27:00.330999  859826 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0917 00:27:00.331050  859826 retry.go:31] will retry after 638.970591ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0917 00:27:00.433937  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:00.442665  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:00.803905  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:00.933181  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:00.941237  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:00.970434  859826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0917 00:27:01.304038  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:01.433760  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:01.445446  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0917 00:27:01.782688  859826 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0917 00:27:01.782771  859826 retry.go:31] will retry after 464.578387ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0917 00:27:01.804172  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0917 00:27:01.880484  859826 node_ready.go:57] node "addons-160127" has "Ready":"False" status (will retry)
	I0917 00:27:01.933199  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:01.941274  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:02.248501  859826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0917 00:27:02.304827  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:02.433189  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:02.444857  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:02.573494  859826 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0917 00:27:02.573632  859826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-160127
	I0917 00:27:02.598564  859826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/21550-857204/.minikube/machines/addons-160127/id_rsa Username:docker}
	I0917 00:27:02.725672  859826 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0917 00:27:02.744397  859826 addons.go:238] Setting addon gcp-auth=true in "addons-160127"
	I0917 00:27:02.744457  859826 host.go:66] Checking if "addons-160127" exists ...
	I0917 00:27:02.744990  859826 cli_runner.go:164] Run: docker container inspect addons-160127 --format={{.State.Status}}
	I0917 00:27:02.765645  859826 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0917 00:27:02.765698  859826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-160127
	I0917 00:27:02.790684  859826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33558 SSHKeyPath:/home/jenkins/minikube-integration/21550-857204/.minikube/machines/addons-160127/id_rsa Username:docker}
	I0917 00:27:02.814325  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:02.933577  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:02.941754  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0917 00:27:03.126120  859826 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0917 00:27:03.126151  859826 retry.go:31] will retry after 676.559292ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0917 00:27:03.129380  859826 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0917 00:27:03.132225  859826 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0917 00:27:03.134944  859826 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0917 00:27:03.134972  859826 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0917 00:27:03.154584  859826 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0917 00:27:03.154608  859826 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0917 00:27:03.173966  859826 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0917 00:27:03.173990  859826 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0917 00:27:03.193595  859826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0917 00:27:03.303731  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:03.433045  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:03.441284  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:03.693334  859826 addons.go:479] Verifying addon gcp-auth=true in "addons-160127"
	I0917 00:27:03.696608  859826 out.go:179] * Verifying gcp-auth addon...
	I0917 00:27:03.700211  859826 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0917 00:27:03.708512  859826 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0917 00:27:03.708530  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:03.803236  859826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0917 00:27:03.805624  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:03.933246  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:03.941601  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:04.209385  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:04.304526  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0917 00:27:04.380275  859826 node_ready.go:57] node "addons-160127" has "Ready":"False" status (will retry)
	I0917 00:27:04.433193  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:04.446263  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0917 00:27:04.618310  859826 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0917 00:27:04.618342  859826 retry.go:31] will retry after 2.405015085s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0917 00:27:04.703153  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:04.803369  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:04.932966  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:04.940804  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:05.208143  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:05.303474  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:05.433413  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:05.441110  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:05.703949  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:05.803267  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:05.933208  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:05.941057  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:06.206745  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:06.303716  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:06.432594  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:06.441288  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:06.703178  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:06.802966  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0917 00:27:06.880576  859826 node_ready.go:57] node "addons-160127" has "Ready":"False" status (will retry)
	I0917 00:27:06.932662  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:06.941315  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:07.024499  859826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0917 00:27:07.210572  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:07.303404  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:07.433718  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:07.442201  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:07.706046  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:07.805841  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0917 00:27:07.829481  859826 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0917 00:27:07.829519  859826 retry.go:31] will retry after 1.824863603s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0917 00:27:07.933030  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:07.940807  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:08.209198  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:08.303219  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:08.432745  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:08.441586  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:08.703658  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:08.803686  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:08.933534  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:08.941085  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:09.207973  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:09.302730  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0917 00:27:09.379170  859826 node_ready.go:57] node "addons-160127" has "Ready":"False" status (will retry)
	I0917 00:27:09.433001  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:09.440881  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:09.655168  859826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0917 00:27:09.703450  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:09.804313  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:09.933091  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:09.941224  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:10.209322  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:10.302978  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:10.432769  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:10.441702  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0917 00:27:10.453206  859826 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0917 00:27:10.453238  859826 retry.go:31] will retry after 3.076951769s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0917 00:27:10.703052  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:10.803470  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:10.932849  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:10.941936  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:11.207693  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:11.303556  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0917 00:27:11.379325  859826 node_ready.go:57] node "addons-160127" has "Ready":"False" status (will retry)
	I0917 00:27:11.433182  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:11.440861  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:11.703615  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:11.803668  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:11.933343  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:11.940896  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:12.207964  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:12.303388  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:12.432580  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:12.441366  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:12.704666  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:12.807434  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:12.932955  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:12.940848  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:13.204129  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:13.303229  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0917 00:27:13.379704  859826 node_ready.go:57] node "addons-160127" has "Ready":"False" status (will retry)
	I0917 00:27:13.433056  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:13.440980  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:13.531115  859826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0917 00:27:13.703871  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:13.803610  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:13.933321  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:13.942086  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:14.212413  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:14.303264  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0917 00:27:14.340259  859826 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0917 00:27:14.340291  859826 retry.go:31] will retry after 6.729010463s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0917 00:27:14.432952  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:14.441053  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:14.703913  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:14.802903  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:14.933776  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:14.941689  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:15.207395  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:15.303441  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:15.433408  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:15.448233  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:15.703204  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:15.803640  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0917 00:27:15.879926  859826 node_ready.go:57] node "addons-160127" has "Ready":"False" status (will retry)
	I0917 00:27:15.932735  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:15.941549  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:16.208719  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:16.302775  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:16.433139  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:16.440881  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:16.703842  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:16.802746  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:16.933608  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:16.941088  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:17.203900  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:17.303556  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:17.433208  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:17.440956  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:17.703844  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:17.802896  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:17.933255  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:17.941212  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:18.202912  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:18.302810  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0917 00:27:18.379613  859826 node_ready.go:57] node "addons-160127" has "Ready":"False" status (will retry)
	I0917 00:27:18.433451  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:18.440952  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:18.703768  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:18.803864  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:18.934661  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:18.941248  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:19.203532  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:19.303497  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:19.433168  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:19.442167  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:19.702879  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:19.803072  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:19.933203  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:19.940949  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:20.207756  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:20.302601  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:20.433130  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:20.440676  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:20.703368  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:20.803461  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0917 00:27:20.879608  859826 node_ready.go:57] node "addons-160127" has "Ready":"False" status (will retry)
	I0917 00:27:20.934035  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:20.940821  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:21.070044  859826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0917 00:27:21.210259  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:21.318639  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:21.433271  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:21.444614  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:21.703330  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:21.804280  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:21.932213  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0917 00:27:21.941356  859826 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0917 00:27:21.941396  859826 retry.go:31] will retry after 8.561335791s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0917 00:27:21.942606  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:22.208044  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:22.302775  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:22.432885  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:22.441947  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:22.703803  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:22.802897  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0917 00:27:22.880007  859826 node_ready.go:57] node "addons-160127" has "Ready":"False" status (will retry)
	I0917 00:27:22.932860  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:22.941742  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:23.207525  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:23.303659  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:23.433525  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:23.441415  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:23.703456  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:23.803708  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:23.933880  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:23.940897  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:24.207903  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:24.303597  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:24.432864  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:24.442095  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:24.703078  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:24.803238  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:24.932795  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:24.941780  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:25.207329  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:25.303378  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0917 00:27:25.378754  859826 node_ready.go:57] node "addons-160127" has "Ready":"False" status (will retry)
	I0917 00:27:25.432320  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:25.441013  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:25.702996  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:25.803306  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:25.932910  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:25.941571  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:26.209531  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:26.309517  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:26.433138  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:26.440798  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:26.703853  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:26.802785  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:26.932642  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:26.941516  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:27.203551  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:27.303772  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0917 00:27:27.379185  859826 node_ready.go:57] node "addons-160127" has "Ready":"False" status (will retry)
	I0917 00:27:27.433034  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:27.440550  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:27.704007  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:27.802744  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:27.933232  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:27.941750  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:28.208050  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:28.303343  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:28.432938  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:28.440831  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:28.703930  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:28.803118  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:28.932505  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:28.941348  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:29.203279  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:29.303275  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:29.432889  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:29.441704  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:29.703439  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:29.803832  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0917 00:27:29.879510  859826 node_ready.go:57] node "addons-160127" has "Ready":"False" status (will retry)
	I0917 00:27:29.933240  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:29.940823  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:30.207463  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:30.303587  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:30.433074  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:30.440646  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:30.502885  859826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0917 00:27:30.703951  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:30.803084  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:30.933071  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:30.941353  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:31.203838  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:31.303886  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0917 00:27:31.341012  859826 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0917 00:27:31.341087  859826 retry.go:31] will retry after 11.878017239s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0917 00:27:31.432712  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:31.441946  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:31.704034  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:31.803465  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:31.933218  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:31.941368  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:32.207755  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:32.302940  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0917 00:27:32.379921  859826 node_ready.go:57] node "addons-160127" has "Ready":"False" status (will retry)
	I0917 00:27:32.432614  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:32.441355  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:32.704313  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:32.803398  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:32.932630  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:32.941606  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:33.208930  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:33.303228  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:33.432996  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:33.440771  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:33.703566  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:33.802772  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:33.933186  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:33.940746  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:34.203695  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:34.303887  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:34.432212  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:34.441268  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:34.703265  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:34.803532  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0917 00:27:34.879083  859826 node_ready.go:57] node "addons-160127" has "Ready":"False" status (will retry)
	I0917 00:27:34.932920  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:34.941659  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:35.203652  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:35.303876  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:35.432974  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:35.440755  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:35.703737  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:35.803535  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:35.933110  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:35.941239  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:36.207375  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:36.303247  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:36.433066  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:36.440771  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:36.703994  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:36.802716  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0917 00:27:36.881137  859826 node_ready.go:57] node "addons-160127" has "Ready":"False" status (will retry)
	I0917 00:27:36.933061  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:36.941235  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:37.236544  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:37.309155  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:37.412115  859826 node_ready.go:49] node "addons-160127" is "Ready"
	I0917 00:27:37.412192  859826 node_ready.go:38] duration metric: took 42.036041552s for node "addons-160127" to be "Ready" ...
	I0917 00:27:37.412231  859826 api_server.go:52] waiting for apiserver process to appear ...
	I0917 00:27:37.412322  859826 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:27:37.438875  859826 api_server.go:72] duration metric: took 45.435331909s to wait for apiserver process to appear ...
	I0917 00:27:37.438948  859826 api_server.go:88] waiting for apiserver healthz status ...
	I0917 00:27:37.438981  859826 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0917 00:27:37.447858  859826 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0917 00:27:37.448863  859826 api_server.go:141] control plane version: v1.34.0
	I0917 00:27:37.448888  859826 api_server.go:131] duration metric: took 9.915726ms to wait for apiserver health ...
	I0917 00:27:37.448897  859826 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 00:27:37.484673  859826 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0917 00:27:37.484747  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:37.485138  859826 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0917 00:27:37.485192  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:37.485777  859826 system_pods.go:59] 19 kube-system pods found
	I0917 00:27:37.485837  859826 system_pods.go:61] "coredns-66bc5c9577-9nv5l" [51fbf1ff-717b-493e-944f-c573bc8ccfff] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:27:37.485859  859826 system_pods.go:61] "csi-hostpath-attacher-0" [3ca1721a-41b2-4102-bbb1-1f68119943e0] Pending
	I0917 00:27:37.485883  859826 system_pods.go:61] "csi-hostpath-resizer-0" [d6ff80b5-a65d-452d-bb17-70393b953d3a] Pending
	I0917 00:27:37.485918  859826 system_pods.go:61] "csi-hostpathplugin-lqstz" [94e920de-6f16-4317-a028-597adfd8221e] Pending
	I0917 00:27:37.485934  859826 system_pods.go:61] "etcd-addons-160127" [6c043612-e545-46ce-aa90-09ecdcccb296] Running
	I0917 00:27:37.485954  859826 system_pods.go:61] "kindnet-pxkz8" [d2fca569-0a1a-423c-9362-afcc032bab4a] Running
	I0917 00:27:37.485987  859826 system_pods.go:61] "kube-apiserver-addons-160127" [f190316f-34a7-4e1e-9700-b62eb393ca4d] Running
	I0917 00:27:37.486009  859826 system_pods.go:61] "kube-controller-manager-addons-160127" [7ab21940-276e-4660-a5a4-bb01751bb362] Running
	I0917 00:27:37.486033  859826 system_pods.go:61] "kube-ingress-dns-minikube" [bc5407df-6d6a-419b-90c3-79837ee1ecbd] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0917 00:27:37.486065  859826 system_pods.go:61] "kube-proxy-sgr9v" [c61127c6-ef26-45b0-89cd-b1b27d18f6bd] Running
	I0917 00:27:37.486088  859826 system_pods.go:61] "kube-scheduler-addons-160127" [ee47e001-3601-4eae-871e-08514cb5b909] Running
	I0917 00:27:37.486110  859826 system_pods.go:61] "metrics-server-85b7d694d7-m46l4" [6dc054c8-2180-40ae-a4e4-91c4c67ecb0a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0917 00:27:37.486143  859826 system_pods.go:61] "nvidia-device-plugin-daemonset-57955" [7915bd39-2026-4aa6-b307-af66b572bdb7] Pending
	I0917 00:27:37.486175  859826 system_pods.go:61] "registry-66898fdd98-s86g4" [716ddc93-7693-4193-8385-510c6f2a4e55] Pending
	I0917 00:27:37.486193  859826 system_pods.go:61] "registry-creds-764b6fb674-9kkp4" [f8c0354a-4c24-49ea-b341-794123fda53d] Pending
	I0917 00:27:37.486224  859826 system_pods.go:61] "registry-proxy-gnjgn" [ab0242cc-c5be-4bbe-afbf-29f473bc10b3] Pending
	I0917 00:27:37.486246  859826 system_pods.go:61] "snapshot-controller-7d9fbc56b8-8xlwv" [e20e093f-7efe-48cb-abdc-5a6d1e15c061] Pending
	I0917 00:27:37.486270  859826 system_pods.go:61] "snapshot-controller-7d9fbc56b8-l5p4c" [867ed2e4-4e19-464d-8d08-e8cd2d1f03ad] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0917 00:27:37.486304  859826 system_pods.go:61] "storage-provisioner" [28835ba0-23ea-467d-bc10-f9fd33cb2978] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0917 00:27:37.486326  859826 system_pods.go:74] duration metric: took 37.422339ms to wait for pod list to return data ...
	I0917 00:27:37.486347  859826 default_sa.go:34] waiting for default service account to be created ...
	I0917 00:27:37.524226  859826 default_sa.go:45] found service account: "default"
	I0917 00:27:37.524304  859826 default_sa.go:55] duration metric: took 37.935609ms for default service account to be created ...
	I0917 00:27:37.524328  859826 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 00:27:37.678153  859826 system_pods.go:86] 19 kube-system pods found
	I0917 00:27:37.678242  859826 system_pods.go:89] "coredns-66bc5c9577-9nv5l" [51fbf1ff-717b-493e-944f-c573bc8ccfff] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:27:37.678265  859826 system_pods.go:89] "csi-hostpath-attacher-0" [3ca1721a-41b2-4102-bbb1-1f68119943e0] Pending
	I0917 00:27:37.678286  859826 system_pods.go:89] "csi-hostpath-resizer-0" [d6ff80b5-a65d-452d-bb17-70393b953d3a] Pending
	I0917 00:27:37.678409  859826 system_pods.go:89] "csi-hostpathplugin-lqstz" [94e920de-6f16-4317-a028-597adfd8221e] Pending
	I0917 00:27:37.678436  859826 system_pods.go:89] "etcd-addons-160127" [6c043612-e545-46ce-aa90-09ecdcccb296] Running
	I0917 00:27:37.678456  859826 system_pods.go:89] "kindnet-pxkz8" [d2fca569-0a1a-423c-9362-afcc032bab4a] Running
	I0917 00:27:37.678488  859826 system_pods.go:89] "kube-apiserver-addons-160127" [f190316f-34a7-4e1e-9700-b62eb393ca4d] Running
	I0917 00:27:37.678510  859826 system_pods.go:89] "kube-controller-manager-addons-160127" [7ab21940-276e-4660-a5a4-bb01751bb362] Running
	I0917 00:27:37.678534  859826 system_pods.go:89] "kube-ingress-dns-minikube" [bc5407df-6d6a-419b-90c3-79837ee1ecbd] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0917 00:27:37.678567  859826 system_pods.go:89] "kube-proxy-sgr9v" [c61127c6-ef26-45b0-89cd-b1b27d18f6bd] Running
	I0917 00:27:37.678590  859826 system_pods.go:89] "kube-scheduler-addons-160127" [ee47e001-3601-4eae-871e-08514cb5b909] Running
	I0917 00:27:37.678612  859826 system_pods.go:89] "metrics-server-85b7d694d7-m46l4" [6dc054c8-2180-40ae-a4e4-91c4c67ecb0a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0917 00:27:37.678647  859826 system_pods.go:89] "nvidia-device-plugin-daemonset-57955" [7915bd39-2026-4aa6-b307-af66b572bdb7] Pending
	I0917 00:27:37.678673  859826 system_pods.go:89] "registry-66898fdd98-s86g4" [716ddc93-7693-4193-8385-510c6f2a4e55] Pending
	I0917 00:27:37.678695  859826 system_pods.go:89] "registry-creds-764b6fb674-9kkp4" [f8c0354a-4c24-49ea-b341-794123fda53d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0917 00:27:37.678729  859826 system_pods.go:89] "registry-proxy-gnjgn" [ab0242cc-c5be-4bbe-afbf-29f473bc10b3] Pending
	I0917 00:27:37.678752  859826 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8xlwv" [e20e093f-7efe-48cb-abdc-5a6d1e15c061] Pending
	I0917 00:27:37.678776  859826 system_pods.go:89] "snapshot-controller-7d9fbc56b8-l5p4c" [867ed2e4-4e19-464d-8d08-e8cd2d1f03ad] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0917 00:27:37.678810  859826 system_pods.go:89] "storage-provisioner" [28835ba0-23ea-467d-bc10-f9fd33cb2978] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0917 00:27:37.678844  859826 retry.go:31] will retry after 201.083526ms: missing components: kube-dns
	I0917 00:27:37.721633  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:37.825003  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:37.917563  859826 system_pods.go:86] 19 kube-system pods found
	I0917 00:27:37.917650  859826 system_pods.go:89] "coredns-66bc5c9577-9nv5l" [51fbf1ff-717b-493e-944f-c573bc8ccfff] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0917 00:27:37.917673  859826 system_pods.go:89] "csi-hostpath-attacher-0" [3ca1721a-41b2-4102-bbb1-1f68119943e0] Pending
	I0917 00:27:37.917696  859826 system_pods.go:89] "csi-hostpath-resizer-0" [d6ff80b5-a65d-452d-bb17-70393b953d3a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0917 00:27:37.917737  859826 system_pods.go:89] "csi-hostpathplugin-lqstz" [94e920de-6f16-4317-a028-597adfd8221e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0917 00:27:37.917757  859826 system_pods.go:89] "etcd-addons-160127" [6c043612-e545-46ce-aa90-09ecdcccb296] Running
	I0917 00:27:37.917778  859826 system_pods.go:89] "kindnet-pxkz8" [d2fca569-0a1a-423c-9362-afcc032bab4a] Running
	I0917 00:27:37.917809  859826 system_pods.go:89] "kube-apiserver-addons-160127" [f190316f-34a7-4e1e-9700-b62eb393ca4d] Running
	I0917 00:27:37.917833  859826 system_pods.go:89] "kube-controller-manager-addons-160127" [7ab21940-276e-4660-a5a4-bb01751bb362] Running
	I0917 00:27:37.917859  859826 system_pods.go:89] "kube-ingress-dns-minikube" [bc5407df-6d6a-419b-90c3-79837ee1ecbd] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0917 00:27:37.917895  859826 system_pods.go:89] "kube-proxy-sgr9v" [c61127c6-ef26-45b0-89cd-b1b27d18f6bd] Running
	I0917 00:27:37.917921  859826 system_pods.go:89] "kube-scheduler-addons-160127" [ee47e001-3601-4eae-871e-08514cb5b909] Running
	I0917 00:27:37.917942  859826 system_pods.go:89] "metrics-server-85b7d694d7-m46l4" [6dc054c8-2180-40ae-a4e4-91c4c67ecb0a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0917 00:27:37.917976  859826 system_pods.go:89] "nvidia-device-plugin-daemonset-57955" [7915bd39-2026-4aa6-b307-af66b572bdb7] Pending
	I0917 00:27:37.918005  859826 system_pods.go:89] "registry-66898fdd98-s86g4" [716ddc93-7693-4193-8385-510c6f2a4e55] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0917 00:27:37.918026  859826 system_pods.go:89] "registry-creds-764b6fb674-9kkp4" [f8c0354a-4c24-49ea-b341-794123fda53d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0917 00:27:37.918061  859826 system_pods.go:89] "registry-proxy-gnjgn" [ab0242cc-c5be-4bbe-afbf-29f473bc10b3] Pending
	I0917 00:27:37.918087  859826 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8xlwv" [e20e093f-7efe-48cb-abdc-5a6d1e15c061] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0917 00:27:37.918112  859826 system_pods.go:89] "snapshot-controller-7d9fbc56b8-l5p4c" [867ed2e4-4e19-464d-8d08-e8cd2d1f03ad] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0917 00:27:37.918151  859826 system_pods.go:89] "storage-provisioner" [28835ba0-23ea-467d-bc10-f9fd33cb2978] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0917 00:27:37.918186  859826 retry.go:31] will retry after 322.107486ms: missing components: kube-dns
	I0917 00:27:37.947936  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:37.956464  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:38.231518  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:38.250587  859826 system_pods.go:86] 19 kube-system pods found
	I0917 00:27:38.250672  859826 system_pods.go:89] "coredns-66bc5c9577-9nv5l" [51fbf1ff-717b-493e-944f-c573bc8ccfff] Running
	I0917 00:27:38.250697  859826 system_pods.go:89] "csi-hostpath-attacher-0" [3ca1721a-41b2-4102-bbb1-1f68119943e0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0917 00:27:38.250739  859826 system_pods.go:89] "csi-hostpath-resizer-0" [d6ff80b5-a65d-452d-bb17-70393b953d3a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0917 00:27:38.250764  859826 system_pods.go:89] "csi-hostpathplugin-lqstz" [94e920de-6f16-4317-a028-597adfd8221e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0917 00:27:38.250783  859826 system_pods.go:89] "etcd-addons-160127" [6c043612-e545-46ce-aa90-09ecdcccb296] Running
	I0917 00:27:38.250816  859826 system_pods.go:89] "kindnet-pxkz8" [d2fca569-0a1a-423c-9362-afcc032bab4a] Running
	I0917 00:27:38.250841  859826 system_pods.go:89] "kube-apiserver-addons-160127" [f190316f-34a7-4e1e-9700-b62eb393ca4d] Running
	I0917 00:27:38.250861  859826 system_pods.go:89] "kube-controller-manager-addons-160127" [7ab21940-276e-4660-a5a4-bb01751bb362] Running
	I0917 00:27:38.250896  859826 system_pods.go:89] "kube-ingress-dns-minikube" [bc5407df-6d6a-419b-90c3-79837ee1ecbd] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0917 00:27:38.250920  859826 system_pods.go:89] "kube-proxy-sgr9v" [c61127c6-ef26-45b0-89cd-b1b27d18f6bd] Running
	I0917 00:27:38.250944  859826 system_pods.go:89] "kube-scheduler-addons-160127" [ee47e001-3601-4eae-871e-08514cb5b909] Running
	I0917 00:27:38.250988  859826 system_pods.go:89] "metrics-server-85b7d694d7-m46l4" [6dc054c8-2180-40ae-a4e4-91c4c67ecb0a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0917 00:27:38.251015  859826 system_pods.go:89] "nvidia-device-plugin-daemonset-57955" [7915bd39-2026-4aa6-b307-af66b572bdb7] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0917 00:27:38.251053  859826 system_pods.go:89] "registry-66898fdd98-s86g4" [716ddc93-7693-4193-8385-510c6f2a4e55] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0917 00:27:38.251078  859826 system_pods.go:89] "registry-creds-764b6fb674-9kkp4" [f8c0354a-4c24-49ea-b341-794123fda53d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0917 00:27:38.251100  859826 system_pods.go:89] "registry-proxy-gnjgn" [ab0242cc-c5be-4bbe-afbf-29f473bc10b3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0917 00:27:38.251134  859826 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8xlwv" [e20e093f-7efe-48cb-abdc-5a6d1e15c061] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0917 00:27:38.251157  859826 system_pods.go:89] "snapshot-controller-7d9fbc56b8-l5p4c" [867ed2e4-4e19-464d-8d08-e8cd2d1f03ad] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0917 00:27:38.251185  859826 system_pods.go:89] "storage-provisioner" [28835ba0-23ea-467d-bc10-f9fd33cb2978] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0917 00:27:38.251223  859826 system_pods.go:126] duration metric: took 726.876246ms to wait for k8s-apps to be running ...
	I0917 00:27:38.251248  859826 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 00:27:38.251337  859826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:27:38.297279  859826 system_svc.go:56] duration metric: took 46.021657ms WaitForService to wait for kubelet
	I0917 00:27:38.297357  859826 kubeadm.go:578] duration metric: took 46.293818017s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 00:27:38.297392  859826 node_conditions.go:102] verifying NodePressure condition ...
	I0917 00:27:38.338232  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:38.361672  859826 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0917 00:27:38.361755  859826 node_conditions.go:123] node cpu capacity is 2
	I0917 00:27:38.361793  859826 node_conditions.go:105] duration metric: took 64.379923ms to run NodePressure ...
	I0917 00:27:38.361834  859826 start.go:241] waiting for startup goroutines ...
	I0917 00:27:38.434583  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:38.442476  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:38.703429  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:38.803655  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:38.933040  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:38.941133  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:39.204917  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:39.303104  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:39.432749  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:39.441727  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:39.703491  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:39.803241  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:39.933725  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:39.942829  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:40.212844  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:40.302837  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:40.433246  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:40.442061  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:40.703333  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:40.804141  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:40.933822  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:40.942325  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:41.209849  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:41.303731  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:41.432907  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:41.441092  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:41.703396  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:41.803413  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:41.933479  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:41.944348  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:42.207910  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:42.303819  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:42.433222  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:42.441337  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:42.703187  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:42.803195  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:42.933781  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:42.942652  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:43.207341  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:43.219565  859826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0917 00:27:43.304276  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:43.433249  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:43.442180  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:43.704683  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:43.804308  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:43.934782  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:43.942073  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:44.208903  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:44.303320  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:44.363652  859826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.144044515s)
	W0917 00:27:44.363739  859826 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0917 00:27:44.363772  859826 retry.go:31] will retry after 19.191680883s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0917 00:27:44.433350  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:44.442085  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:44.703493  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:44.804824  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:44.935469  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:44.941527  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:45.219161  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:45.305731  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:45.435825  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:45.444545  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:45.703676  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:45.804027  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:45.935316  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:45.942722  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:46.212782  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:46.312765  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:46.432728  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:46.442336  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:46.703680  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:46.804300  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:46.936739  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:46.942128  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:47.205958  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:47.304890  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:47.433352  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:47.441997  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:47.703356  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:47.803807  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:47.933101  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:47.941863  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:48.205741  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:48.305168  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:48.433831  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:48.442731  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:48.703848  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:48.804035  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:48.933438  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:48.941903  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:49.219630  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:49.303454  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:49.433572  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:49.442165  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:49.704234  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:49.803287  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:49.934011  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:49.942141  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:50.205853  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:50.306808  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:50.432833  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:50.440852  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:50.704874  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:50.807640  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:50.932993  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:50.942955  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:51.207076  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:51.303690  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:51.433841  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:51.450363  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:51.703454  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:51.803811  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:51.933501  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:51.951596  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:52.207314  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:52.305254  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:52.441073  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:52.448439  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:52.703986  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:52.806828  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:52.932765  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:52.942756  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:53.210733  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:53.304437  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:53.433862  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:53.441406  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:53.703928  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:53.806155  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:53.933221  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:53.941339  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:54.208579  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:54.303681  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:54.438179  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:54.442007  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:54.704544  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:54.804883  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:54.934580  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:54.942058  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:55.204375  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:55.303478  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:55.433309  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:55.441659  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:55.704068  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:55.803861  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:55.933570  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:55.942008  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:56.207393  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:56.303642  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:56.432922  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:56.441565  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:56.704040  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:56.806626  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:56.933385  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:56.942580  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:57.209266  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:57.303549  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:57.434517  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:57.441653  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:57.703822  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:57.814331  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:57.934515  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:57.943018  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:58.214063  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:58.302813  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:58.434781  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:58.440851  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:58.703926  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:58.803265  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:58.933234  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:58.941119  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:59.209044  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:59.310759  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:59.432999  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:59.441054  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:27:59.723925  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:27:59.820203  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:27:59.933102  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:27:59.940849  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:00.237887  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:00.328277  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:00.449508  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:28:00.450111  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:00.708686  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:00.804747  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:00.933634  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:28:00.942283  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:01.208492  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:01.304491  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:01.433460  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:28:01.442140  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:01.716955  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:01.803189  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:01.933237  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:28:01.941465  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:02.209974  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:02.303095  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:02.434262  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:28:02.441909  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:02.711807  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:02.827194  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:02.933505  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:28:02.941578  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:03.212050  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:03.303488  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:03.433245  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:28:03.441848  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:03.556254  859826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0917 00:28:03.726163  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:03.803725  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:03.933856  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:28:03.941703  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:04.213031  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:04.302926  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:04.435068  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:28:04.450764  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0917 00:28:04.502496  859826 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0917 00:28:04.502576  859826 retry.go:31] will retry after 17.255842651s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0917 00:28:04.703484  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:04.804022  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:04.933125  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:28:04.941808  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:05.217010  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:05.304719  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:05.435613  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:28:05.443081  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:05.708281  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:05.803689  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:05.932977  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:28:05.941020  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:06.207721  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:06.304264  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:06.433904  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:28:06.442017  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:06.704272  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:06.803314  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:06.933722  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:28:06.942379  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:07.210237  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:07.311158  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:07.434101  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:28:07.441194  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:07.712779  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:07.811192  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:07.933347  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:28:07.941314  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:08.207094  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:08.303169  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:08.433145  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:28:08.441657  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:08.703691  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:08.803630  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:08.933085  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:28:08.941030  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:09.206947  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:09.303118  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:09.433606  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:28:09.441976  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:09.704111  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:09.818571  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:09.933837  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:28:09.941653  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:10.208157  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:10.303143  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:10.433405  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:28:10.442715  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:10.705498  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:10.804020  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:10.933585  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:28:10.942730  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:11.207265  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:11.304321  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:11.438601  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:28:11.449413  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:11.703935  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:11.805532  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:11.945699  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:28:11.958882  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:12.205122  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:12.303909  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:12.433393  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:28:12.441969  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:12.703857  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:12.804705  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:12.933126  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:28:12.941933  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:13.207031  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:13.303562  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:13.433903  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:28:13.442520  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:13.703873  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:13.804202  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:13.933952  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:28:13.941526  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:14.210629  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:14.304450  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:14.432626  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:28:14.452868  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:14.704709  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:14.803053  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:14.934081  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:28:14.942611  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:15.221135  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:15.304622  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:15.434018  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:28:15.442183  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:15.704966  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:15.803134  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:15.934083  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:28:15.941626  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:16.209385  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:16.305230  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:16.433626  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:28:16.442302  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:16.704254  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:16.803545  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:16.933633  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:28:16.942072  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:17.206774  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:17.307213  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:17.433133  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:28:17.441984  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:17.703548  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:17.803606  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:17.937362  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:28:17.941913  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:18.211622  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:18.304507  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:18.432948  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:28:18.441136  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:18.703659  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:18.803167  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:18.935183  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:28:18.941484  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:19.216781  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:19.303263  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:19.434195  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:28:19.452833  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:19.704253  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:19.803407  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:19.948793  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:28:19.949685  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:20.204622  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:20.306844  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:20.439189  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:28:20.448072  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:20.706019  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:20.807687  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:20.948277  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:28:20.948372  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:21.211640  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:21.304027  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:21.435065  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:28:21.445177  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:21.704224  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:21.759568  859826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0917 00:28:21.812882  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:21.949729  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:28:21.964962  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:22.213616  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:22.314396  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:22.436224  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:28:22.452233  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:22.716792  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:22.808073  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:22.937415  859826 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.177755262s)
	W0917 00:28:22.937452  859826 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0917 00:28:22.937472  859826 retry.go:31] will retry after 26.051938222s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0917 00:28:22.945747  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:22.947360  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:28:23.213774  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:23.304787  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:23.433608  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:28:23.444918  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:23.705829  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:23.803487  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:23.934654  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:28:23.944230  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:24.220263  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:24.303848  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:24.436658  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:28:24.442322  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:24.705529  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:24.807351  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:24.947594  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:28:24.950360  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:25.208732  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:25.303492  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:25.432609  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:28:25.442086  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:25.703559  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:25.803966  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:25.933469  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:28:25.943008  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:26.204404  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:26.303694  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:26.432963  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:28:26.441431  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:26.703751  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:26.803136  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:26.933675  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:28:26.942302  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:27.204024  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:27.303487  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:27.433903  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:28:27.441429  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:27.704093  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:27.803316  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:27.933845  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:28:27.942206  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:28.217301  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:28.303800  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:28.433024  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:28:28.441799  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:28.703892  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:28.802943  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:28.932982  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 00:28:28.941907  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:29.211386  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:29.305367  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:29.435240  859826 kapi.go:107] duration metric: took 1m32.005689282s to wait for kubernetes.io/minikube-addons=registry ...
	I0917 00:28:29.442421  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:29.713251  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:29.805807  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:29.946534  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:30.220211  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:30.303858  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:30.442111  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:30.704420  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:30.804881  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:30.947268  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:31.208759  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:31.304424  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:31.442009  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:31.702900  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:31.803002  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:31.941412  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:32.210386  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:32.303864  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:32.441503  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:32.704477  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:32.818165  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:32.943473  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:33.205375  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:33.304540  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:33.452335  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:33.714310  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:33.804424  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:33.960584  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:34.220574  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:34.325870  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:34.445182  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:34.704664  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:34.803608  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:34.941981  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:35.214715  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:35.303602  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:35.442676  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:35.704679  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:35.804339  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:35.942274  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:36.211722  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:36.308453  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:36.441769  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:36.710277  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:36.808495  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:36.944018  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:37.212039  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:37.307050  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:37.442906  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:37.704000  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:37.803476  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:37.942705  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:38.205491  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:38.303913  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:38.441466  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:38.703286  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:38.804534  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:38.942146  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:39.207971  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:39.303624  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:39.467972  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:39.704075  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:39.803217  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:39.941972  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:40.228509  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:40.304401  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:40.441561  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:40.704729  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:40.805136  859826 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 00:28:40.948592  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:41.204746  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:41.303785  859826 kapi.go:107] duration metric: took 1m43.003971047s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0917 00:28:41.443071  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:41.704588  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:41.945327  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:42.291851  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:42.442857  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:42.704331  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:42.941952  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:43.204103  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:43.441755  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:43.703697  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:43.943711  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:44.205567  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:44.440799  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:44.704193  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:44.941747  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:45.215619  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:45.441624  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:45.704494  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:45.942717  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:46.209256  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:46.441829  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:46.707896  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:46.984040  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:47.216189  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:47.442820  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:47.704084  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 00:28:47.941909  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:48.224082  859826 kapi.go:107] duration metric: took 1m44.523867028s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0917 00:28:48.227111  859826 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-160127 cluster.
	I0917 00:28:48.230004  859826 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0917 00:28:48.232898  859826 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0917 00:28:48.442673  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:48.941831  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:48.989861  859826 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0917 00:28:49.442084  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0917 00:28:49.929959  859826 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0917 00:28:49.930051  859826 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
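	The validation failure above (`apiVersion not set, kind not set`) means one document in `ig-crd.yaml` was missing the two type fields that every Kubernetes manifest must declare before `kubectl`'s client-side validation will accept it. A minimal well-formed header looks like the fragment below; the resource and name shown are illustrative, not recovered from the failing file:

```yaml
# Every Kubernetes manifest document must carry these two fields, or
# "kubectl apply" rejects it exactly as in the log above.
apiVersion: apiextensions.k8s.io/v1   # API group/version of the resource
kind: CustomResourceDefinition        # resource type
metadata:
  name: example-crd                   # illustrative name only
```

	Passing `--validate=false`, as the error message suggests, only skips the client-side check; the API server would still reject a document without `apiVersion` and `kind`.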
	I0917 00:28:49.944179  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:50.441873  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:50.942690  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:51.441771  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:51.942589  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:52.441993  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:52.942736  859826 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 00:28:53.441137  859826 kapi.go:107] duration metric: took 1m54.50330763s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0917 00:28:53.444409  859826 out.go:179] * Enabled addons: registry-creds, amd-gpu-device-plugin, cloud-spanner, ingress-dns, nvidia-device-plugin, storage-provisioner, storage-provisioner-rancher, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0917 00:28:53.447232  859826 addons.go:514] duration metric: took 2m1.4437884s for enable addons: enabled=[registry-creds amd-gpu-device-plugin cloud-spanner ingress-dns nvidia-device-plugin storage-provisioner storage-provisioner-rancher metrics-server yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0917 00:28:53.447282  859826 start.go:246] waiting for cluster config update ...
	I0917 00:28:53.447305  859826 start.go:255] writing updated cluster config ...
	I0917 00:28:53.447604  859826 ssh_runner.go:195] Run: rm -f paused
	I0917 00:28:53.452286  859826 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0917 00:28:53.455776  859826 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-9nv5l" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:28:53.461666  859826 pod_ready.go:94] pod "coredns-66bc5c9577-9nv5l" is "Ready"
	I0917 00:28:53.461700  859826 pod_ready.go:86] duration metric: took 5.866835ms for pod "coredns-66bc5c9577-9nv5l" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:28:53.542005  859826 pod_ready.go:83] waiting for pod "etcd-addons-160127" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:28:53.546884  859826 pod_ready.go:94] pod "etcd-addons-160127" is "Ready"
	I0917 00:28:53.546917  859826 pod_ready.go:86] duration metric: took 4.882428ms for pod "etcd-addons-160127" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:28:53.549596  859826 pod_ready.go:83] waiting for pod "kube-apiserver-addons-160127" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:28:53.554386  859826 pod_ready.go:94] pod "kube-apiserver-addons-160127" is "Ready"
	I0917 00:28:53.554411  859826 pod_ready.go:86] duration metric: took 4.787058ms for pod "kube-apiserver-addons-160127" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:28:53.556735  859826 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-160127" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:28:53.856801  859826 pod_ready.go:94] pod "kube-controller-manager-addons-160127" is "Ready"
	I0917 00:28:53.856879  859826 pod_ready.go:86] duration metric: took 300.066351ms for pod "kube-controller-manager-addons-160127" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:28:54.055963  859826 pod_ready.go:83] waiting for pod "kube-proxy-sgr9v" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:28:54.455709  859826 pod_ready.go:94] pod "kube-proxy-sgr9v" is "Ready"
	I0917 00:28:54.455738  859826 pod_ready.go:86] duration metric: took 399.750646ms for pod "kube-proxy-sgr9v" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:28:54.657041  859826 pod_ready.go:83] waiting for pod "kube-scheduler-addons-160127" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:28:55.057016  859826 pod_ready.go:94] pod "kube-scheduler-addons-160127" is "Ready"
	I0917 00:28:55.057041  859826 pod_ready.go:86] duration metric: took 399.969922ms for pod "kube-scheduler-addons-160127" in "kube-system" namespace to be "Ready" or be gone ...
	I0917 00:28:55.057054  859826 pod_ready.go:40] duration metric: took 1.604737846s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0917 00:28:55.296748  859826 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0917 00:28:55.307881  859826 out.go:179] * Done! kubectl is now configured to use "addons-160127" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 17 00:31:47 addons-160127 crio[994]: time="2025-09-17 00:31:47.336426049Z" level=info msg="Removed pod sandbox: f848c4560de558eb9c94ec24cb6f3e7970a452056de148201025b177b02cdab4" id=56cdc1a6-4c50-4b18-8c3e-6964c230ac0d name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 17 00:33:02 addons-160127 crio[994]: time="2025-09-17 00:33:02.257754902Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-kpnxw/POD" id=41c748c1-45fc-4967-bb4f-2cb5697d22f7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 17 00:33:02 addons-160127 crio[994]: time="2025-09-17 00:33:02.258504680Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 17 00:33:02 addons-160127 crio[994]: time="2025-09-17 00:33:02.294959854Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-kpnxw Namespace:default ID:2f3360586bb301cd0e7096a282cf2bf62bba5988b0a624ce11504625708109ae UID:52f07d1d-47a5-41b9-9b8f-06539806f2db NetNS:/var/run/netns/ed4e877b-378b-404d-8bbf-0b1074094de5 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 17 00:33:02 addons-160127 crio[994]: time="2025-09-17 00:33:02.295152628Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-kpnxw to CNI network \"kindnet\" (type=ptp)"
	Sep 17 00:33:02 addons-160127 crio[994]: time="2025-09-17 00:33:02.340129788Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-kpnxw Namespace:default ID:2f3360586bb301cd0e7096a282cf2bf62bba5988b0a624ce11504625708109ae UID:52f07d1d-47a5-41b9-9b8f-06539806f2db NetNS:/var/run/netns/ed4e877b-378b-404d-8bbf-0b1074094de5 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 17 00:33:02 addons-160127 crio[994]: time="2025-09-17 00:33:02.340282209Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-kpnxw for CNI network kindnet (type=ptp)"
	Sep 17 00:33:02 addons-160127 crio[994]: time="2025-09-17 00:33:02.343914668Z" level=info msg="Ran pod sandbox 2f3360586bb301cd0e7096a282cf2bf62bba5988b0a624ce11504625708109ae with infra container: default/hello-world-app-5d498dc89-kpnxw/POD" id=41c748c1-45fc-4967-bb4f-2cb5697d22f7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 17 00:33:02 addons-160127 crio[994]: time="2025-09-17 00:33:02.346493411Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=511ee33c-3fa6-4248-9a75-22c7e990542b name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:33:02 addons-160127 crio[994]: time="2025-09-17 00:33:02.346726382Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=511ee33c-3fa6-4248-9a75-22c7e990542b name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:33:02 addons-160127 crio[994]: time="2025-09-17 00:33:02.347703118Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=7685d4d0-5fe2-4d36-823f-76d3c01690da name=/runtime.v1.ImageService/PullImage
	Sep 17 00:33:02 addons-160127 crio[994]: time="2025-09-17 00:33:02.350058030Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Sep 17 00:33:02 addons-160127 crio[994]: time="2025-09-17 00:33:02.587709732Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Sep 17 00:33:03 addons-160127 crio[994]: time="2025-09-17 00:33:03.375445129Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6" id=7685d4d0-5fe2-4d36-823f-76d3c01690da name=/runtime.v1.ImageService/PullImage
	Sep 17 00:33:03 addons-160127 crio[994]: time="2025-09-17 00:33:03.376415931Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=81dbf717-72b9-4d87-bda9-39bf292883d5 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:33:03 addons-160127 crio[994]: time="2025-09-17 00:33:03.378442289Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b],Size_:4789170,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=81dbf717-72b9-4d87-bda9-39bf292883d5 name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:33:03 addons-160127 crio[994]: time="2025-09-17 00:33:03.379657986Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=cfe06abe-45c8-42cd-bec3-ebbf38f63acf name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:33:03 addons-160127 crio[994]: time="2025-09-17 00:33:03.380242740Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b],Size_:4789170,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=cfe06abe-45c8-42cd-bec3-ebbf38f63acf name=/runtime.v1.ImageService/ImageStatus
	Sep 17 00:33:03 addons-160127 crio[994]: time="2025-09-17 00:33:03.385366962Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-kpnxw/hello-world-app" id=2dd6e30d-6554-4fc2-ac3e-400321a843c0 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 17 00:33:03 addons-160127 crio[994]: time="2025-09-17 00:33:03.385460698Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 17 00:33:03 addons-160127 crio[994]: time="2025-09-17 00:33:03.416943489Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/88916c6062d1173b926f6a4b420345a92cccb599f74085a3f5215f0605d3eda8/merged/etc/passwd: no such file or directory"
	Sep 17 00:33:03 addons-160127 crio[994]: time="2025-09-17 00:33:03.417135754Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/88916c6062d1173b926f6a4b420345a92cccb599f74085a3f5215f0605d3eda8/merged/etc/group: no such file or directory"
	Sep 17 00:33:03 addons-160127 crio[994]: time="2025-09-17 00:33:03.485864203Z" level=info msg="Created container 5005b7aff94b7113aedf14d6693c06b1faa10050f66fa9a1e44fb20f1c8f6b8f: default/hello-world-app-5d498dc89-kpnxw/hello-world-app" id=2dd6e30d-6554-4fc2-ac3e-400321a843c0 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 17 00:33:03 addons-160127 crio[994]: time="2025-09-17 00:33:03.486715406Z" level=info msg="Starting container: 5005b7aff94b7113aedf14d6693c06b1faa10050f66fa9a1e44fb20f1c8f6b8f" id=a1deddea-a79c-47cf-89d6-45a15622f948 name=/runtime.v1.RuntimeService/StartContainer
	Sep 17 00:33:03 addons-160127 crio[994]: time="2025-09-17 00:33:03.503169505Z" level=info msg="Started container" PID=10154 containerID=5005b7aff94b7113aedf14d6693c06b1faa10050f66fa9a1e44fb20f1c8f6b8f description=default/hello-world-app-5d498dc89-kpnxw/hello-world-app id=a1deddea-a79c-47cf-89d6-45a15622f948 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2f3360586bb301cd0e7096a282cf2bf62bba5988b0a624ce11504625708109ae
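	The CRI-O entries above trace the standard CRI call sequence for bringing up `hello-world-app`: RunPodSandbox, ImageStatus (miss), PullImage, ImageStatus (hit), CreateContainer, StartContainer. The RPC names are the real CRI RuntimeService/ImageService methods visible in the log; the client below is a hypothetical in-memory sketch of that ordering, not CRI-O's implementation:

```python
# Sketch of the CRI call order observed in the CRI-O log above.
# Only the RPC names come from the log; FakeCRI is illustrative.
from dataclasses import dataclass, field


@dataclass
class FakeCRI:
    images: set = field(default_factory=set)
    calls: list = field(default_factory=list)

    def run_pod_sandbox(self, pod: str) -> str:
        self.calls.append("RunPodSandbox")
        return f"sandbox-{pod}"

    def image_status(self, image: str) -> bool:
        self.calls.append("ImageStatus")
        return image in self.images  # False -> "Image ... not found"

    def pull_image(self, image: str) -> None:
        self.calls.append("PullImage")
        self.images.add(image)

    def create_container(self, sandbox: str, image: str) -> str:
        self.calls.append("CreateContainer")
        return "ctr-1"

    def start_container(self, ctr: str) -> None:
        self.calls.append("StartContainer")


def start_pod(cri: FakeCRI, pod: str, image: str) -> None:
    sandbox = cri.run_pod_sandbox(pod)     # "Running pod sandbox: ..."
    if not cri.image_status(image):        # "Checking image status: ..."
        cri.pull_image(image)              # "Pulling image: ..."
        cri.image_status(image)            # re-check after the pull succeeds
    ctr = cri.create_container(sandbox, image)  # "Creating container: ..."
    cri.start_container(ctr)               # "Started container"


start_pod(FakeCRI(), "hello-world-app",
          "docker.io/kicbase/echo-server:1.0")
```

	Each `calls` entry corresponds one-to-one with a `/runtime.v1.RuntimeService` or `/runtime.v1.ImageService` line in the log, which is why the pull is bracketed by two image-status checks.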
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD
	5005b7aff94b7       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        Less than a second ago   Running             hello-world-app           0                   2f3360586bb30       hello-world-app-5d498dc89-kpnxw
	4e5d559c64cf4       docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8                              2 minutes ago            Running             nginx                     0                   1960cd537e9ba       nginx
	ae0984876fc06       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          4 minutes ago            Running             busybox                   0                   c64d831704277       busybox
	183d2f9f9597b       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef             4 minutes ago            Running             controller                0                   7c03dc3882d18       ingress-nginx-controller-9cc49f96f-5tdpz
	a486d97e83b24       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5            4 minutes ago            Running             gadget                    0                   199ee7ecd41f4       gadget-qgjkm
	32c91f9aaadc9       c67c707f59d87e1add5896e856d3ed36fbff2a778620f70d33b799e0541a77e3                                                             4 minutes ago            Exited              patch                     2                   05a9572df33ae       ingress-nginx-admission-patch-q77zt
	a6eb31c4bac99       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24   5 minutes ago            Exited              create                    0                   e0120a37712a8       ingress-nginx-admission-create-w4bbl
	fe350e3c2cc66       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958               5 minutes ago            Running             minikube-ingress-dns      0                   a73717d3b9180       kube-ingress-dns-minikube
	383369be6bbc9       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                             5 minutes ago            Running             coredns                   0                   1b8ac32d91f7d       coredns-66bc5c9577-9nv5l
	be793dccdb22e       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             5 minutes ago            Running             storage-provisioner       0                   e7282ff528e99       storage-provisioner
	4bd3f29366f22       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                             6 minutes ago            Running             kindnet-cni               0                   1bfb6415beb9a       kindnet-pxkz8
	eeca9e39f37bc       6fc32d66c141152245438e6512df788cb52d64a1617e33561950b0e7a4675abf                                                             6 minutes ago            Running             kube-proxy                0                   796d6634ced24       kube-proxy-sgr9v
	2d2c7b707b15a       a25f5ef9c34c37c649f3b4f9631a169221ac2d6f41d9767c7588cd355f76f9ee                                                             6 minutes ago            Running             kube-scheduler            0                   0039c96ae681a       kube-scheduler-addons-160127
	b3f4469924d13       996be7e86d9b3a549d718de63713d9fea9db1f45ac44863a6770292d7b463570                                                             6 minutes ago            Running             kube-controller-manager   0                   263cad883c054       kube-controller-manager-addons-160127
	ed26be7932892       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                             6 minutes ago            Running             etcd                      0                   dd431765fa0ad       etcd-addons-160127
	504c0fce83d21       d291939e994064911484215449d0ab96c535b02adc4fc5d0ad4e438cf71465be                                                             6 minutes ago            Running             kube-apiserver            0                   080e39a6959ab       kube-apiserver-addons-160127
	
	
	==> coredns [383369be6bbc9a089e6d2ceef4e7b1bac800d0a4aa629a27ea9e2f8de20c5a07] <==
	[INFO] 10.244.0.16:35546 - 26427 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.004071168s
	[INFO] 10.244.0.16:35546 - 63025 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000736863s
	[INFO] 10.244.0.16:35546 - 45908 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000933994s
	[INFO] 10.244.0.16:39871 - 52197 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000163744s
	[INFO] 10.244.0.16:39871 - 51728 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000267244s
	[INFO] 10.244.0.16:34775 - 16718 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000108154s
	[INFO] 10.244.0.16:34775 - 16521 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000114447s
	[INFO] 10.244.0.16:39576 - 26122 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000092588s
	[INFO] 10.244.0.16:39576 - 25925 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00015503s
	[INFO] 10.244.0.16:48627 - 64958 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001126021s
	[INFO] 10.244.0.16:48627 - 65171 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001301063s
	[INFO] 10.244.0.16:58524 - 10941 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000128207s
	[INFO] 10.244.0.16:58524 - 10537 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000092021s
	[INFO] 10.244.0.21:33973 - 57041 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000233373s
	[INFO] 10.244.0.21:59131 - 52139 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000185127s
	[INFO] 10.244.0.21:60362 - 22467 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000155177s
	[INFO] 10.244.0.21:54698 - 50244 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000072625s
	[INFO] 10.244.0.21:35961 - 23838 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000163268s
	[INFO] 10.244.0.21:42628 - 5287 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000071763s
	[INFO] 10.244.0.21:57208 - 58487 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002772s
	[INFO] 10.244.0.21:38718 - 26252 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.00263805s
	[INFO] 10.244.0.21:54403 - 48729 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.002041529s
	[INFO] 10.244.0.21:60002 - 15662 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003186307s
	[INFO] 10.244.0.23:45111 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000166057s
	[INFO] 10.244.0.23:33640 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000104305s
	
	
	==> describe nodes <==
	Name:               addons-160127
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-160127
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9829f0bc17c523e4378d28e0c25741106f24f00a
	                    minikube.k8s.io/name=addons-160127
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_17T00_26_47_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-160127
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Sep 2025 00:26:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-160127
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Sep 2025 00:32:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Sep 2025 00:30:51 +0000   Wed, 17 Sep 2025 00:26:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Sep 2025 00:30:51 +0000   Wed, 17 Sep 2025 00:26:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Sep 2025 00:30:51 +0000   Wed, 17 Sep 2025 00:26:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Sep 2025 00:30:51 +0000   Wed, 17 Sep 2025 00:27:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-160127
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 7fe5211217954556bd7e0a318dc98a9f
	  System UUID:                7c001af9-5cfe-4302-bf9e-915706d96358
	  Boot ID:                    6b076a96-9a4c-4fa4-bd00-8a6e573f8463
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	  default                     hello-world-app-5d498dc89-kpnxw             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  gadget                      gadget-qgjkm                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m7s
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-5tdpz    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         6m6s
	  kube-system                 coredns-66bc5c9577-9nv5l                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m12s
	  kube-system                 etcd-addons-160127                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m18s
	  kube-system                 kindnet-pxkz8                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m11s
	  kube-system                 kube-apiserver-addons-160127                250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m18s
	  kube-system                 kube-controller-manager-addons-160127       200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m18s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 kube-proxy-sgr9v                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m12s
	  kube-system                 kube-scheduler-addons-160127                100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m18s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             310Mi (3%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 6m6s                   kube-proxy       
	  Normal   Starting                 6m25s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m25s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  6m25s (x8 over 6m25s)  kubelet          Node addons-160127 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m25s (x8 over 6m25s)  kubelet          Node addons-160127 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m25s (x8 over 6m25s)  kubelet          Node addons-160127 status is now: NodeHasSufficientPID
	  Normal   Starting                 6m18s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m18s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  6m18s                  kubelet          Node addons-160127 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m18s                  kubelet          Node addons-160127 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m18s                  kubelet          Node addons-160127 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m13s                  node-controller  Node addons-160127 event: Registered Node addons-160127 in Controller
	  Normal   NodeReady                5m27s                  kubelet          Node addons-160127 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep16 23:35] systemd-journald[216]: Failed to send stream file descriptor to service manager: Connection refused
	[Sep17 00:19] systemd-journald[221]: Failed to send stream file descriptor to service manager: Connection refused
	[Sep17 00:24] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [ed26be79328922f28c021e68379452ccb0f1942d5f9d43a219afaa9fec9f94e4] <==
	{"level":"warn","ts":"2025-09-17T00:26:42.875583Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:26:43.020933Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36550","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-17T00:26:52.633614Z","caller":"traceutil/trace.go:172","msg":"trace[1157973626] transaction","detail":"{read_only:false; response_revision:361; number_of_response:1; }","duration":"117.634252ms","start":"2025-09-17T00:26:52.515963Z","end":"2025-09-17T00:26:52.633597Z","steps":["trace[1157973626] 'process raft request'  (duration: 79.236953ms)","trace[1157973626] 'compare'  (duration: 38.177932ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-17T00:26:54.950042Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"138.261019ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/kindnet\" limit:1 ","response":"range_response_count:1 size:520"}
	{"level":"info","ts":"2025-09-17T00:26:54.950424Z","caller":"traceutil/trace.go:172","msg":"trace[1588324792] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/kindnet; range_end:; response_count:1; response_revision:384; }","duration":"138.670189ms","start":"2025-09-17T00:26:54.811740Z","end":"2025-09-17T00:26:54.950410Z","steps":["trace[1588324792] 'range keys from in-memory index tree'  (duration: 138.03302ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-17T00:26:56.145783Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"106.046029ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/minikube-ingress-dns\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-17T00:26:56.145852Z","caller":"traceutil/trace.go:172","msg":"trace[2002940089] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/minikube-ingress-dns; range_end:; response_count:0; response_revision:398; }","duration":"106.123421ms","start":"2025-09-17T00:26:56.039717Z","end":"2025-09-17T00:26:56.145840Z","steps":["trace[2002940089] 'agreement among raft nodes before linearized reading'  (duration: 106.020093ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-17T00:26:56.168868Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"106.771848ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-160127\" limit:1 ","response":"range_response_count:1 size:5601"}
	{"level":"info","ts":"2025-09-17T00:26:56.168935Z","caller":"traceutil/trace.go:172","msg":"trace[2144293595] range","detail":"{range_begin:/registry/minions/addons-160127; range_end:; response_count:1; response_revision:398; }","duration":"106.849486ms","start":"2025-09-17T00:26:56.062073Z","end":"2025-09-17T00:26:56.168922Z","steps":["trace[2144293595] 'agreement among raft nodes before linearized reading'  (duration: 106.613446ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-17T00:26:56.183824Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"126.533536ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/local-path-storage\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-17T00:26:56.184336Z","caller":"traceutil/trace.go:172","msg":"trace[1488851565] range","detail":"{range_begin:/registry/namespaces/local-path-storage; range_end:; response_count:0; response_revision:398; }","duration":"127.251865ms","start":"2025-09-17T00:26:56.057071Z","end":"2025-09-17T00:26:56.184323Z","steps":["trace[1488851565] 'agreement among raft nodes before linearized reading'  (duration: 126.484905ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-17T00:26:56.185618Z","caller":"traceutil/trace.go:172","msg":"trace[1991964952] transaction","detail":"{read_only:false; response_revision:399; number_of_response:1; }","duration":"121.257594ms","start":"2025-09-17T00:26:56.064346Z","end":"2025-09-17T00:26:56.185604Z","steps":["trace[1991964952] 'process raft request'  (duration: 119.651904ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-17T00:26:56.187567Z","caller":"traceutil/trace.go:172","msg":"trace[689176810] transaction","detail":"{read_only:false; response_revision:400; number_of_response:1; }","duration":"104.979316ms","start":"2025-09-17T00:26:56.082555Z","end":"2025-09-17T00:26:56.187534Z","steps":["trace[689176810] 'process raft request'  (duration: 101.509313ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-17T00:26:56.198793Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"142.096918ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" limit:1 ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2025-09-17T00:26:56.199018Z","caller":"traceutil/trace.go:172","msg":"trace[418724122] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:400; }","duration":"142.334533ms","start":"2025-09-17T00:26:56.056670Z","end":"2025-09-17T00:26:56.199005Z","steps":["trace[418724122] 'agreement among raft nodes before linearized reading'  (duration: 127.877595ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-17T00:26:56.200049Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"160.245194ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/default/cloud-spanner-emulator\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-17T00:26:56.200792Z","caller":"traceutil/trace.go:172","msg":"trace[876245759] range","detail":"{range_begin:/registry/services/specs/default/cloud-spanner-emulator; range_end:; response_count:0; response_revision:400; }","duration":"160.990525ms","start":"2025-09-17T00:26:56.039784Z","end":"2025-09-17T00:26:56.200775Z","steps":["trace[876245759] 'agreement among raft nodes before linearized reading'  (duration: 144.799345ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-17T00:26:56.203285Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"163.394536ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/gadget\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-17T00:26:56.203518Z","caller":"traceutil/trace.go:172","msg":"trace[932224198] range","detail":"{range_begin:/registry/namespaces/gadget; range_end:; response_count:0; response_revision:400; }","duration":"163.631742ms","start":"2025-09-17T00:26:56.039873Z","end":"2025-09-17T00:26:56.203505Z","steps":["trace[932224198] 'agreement among raft nodes before linearized reading'  (duration: 144.705124ms)","trace[932224198] 'range keys from in-memory index tree'  (duration: 18.676185ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-17T00:26:59.291377Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:26:59.335493Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:27:21.297977Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:27:21.329697Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:27:21.377370Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-17T00:27:21.400405Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54884","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 00:33:04 up  3:15,  0 users,  load average: 0.23, 1.73, 3.14
	Linux addons-160127 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [4bd3f29366f22225467229c5fd50ef3bad0c2613e1d1f3918a7083e5ae6e18ea] <==
	I0917 00:30:56.627694       1 main.go:301] handling current node
	I0917 00:31:06.627848       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:31:06.627963       1 main.go:301] handling current node
	I0917 00:31:16.631237       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:31:16.631670       1 main.go:301] handling current node
	I0917 00:31:26.628840       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:31:26.628890       1 main.go:301] handling current node
	I0917 00:31:36.628662       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:31:36.628696       1 main.go:301] handling current node
	I0917 00:31:46.630471       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:31:46.630503       1 main.go:301] handling current node
	I0917 00:31:56.628635       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:31:56.628728       1 main.go:301] handling current node
	I0917 00:32:06.628220       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:32:06.628337       1 main.go:301] handling current node
	I0917 00:32:16.628627       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:32:16.628658       1 main.go:301] handling current node
	I0917 00:32:26.632293       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:32:26.632410       1 main.go:301] handling current node
	I0917 00:32:36.627842       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:32:36.627949       1 main.go:301] handling current node
	I0917 00:32:46.632641       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:32:46.632677       1 main.go:301] handling current node
	I0917 00:32:56.632469       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0917 00:32:56.632506       1 main.go:301] handling current node
	
	
	==> kube-apiserver [504c0fce83d21d6bef5a7e21d91e5d9e7440bd9bffa92b74d666cc3578859612] <==
	E0917 00:29:07.740224       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:42620: use of closed network connection
	I0917 00:29:08.048646       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:29:35.850123       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:29:40.712104       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.105.159.0"}
	E0917 00:30:00.873502       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0917 00:30:30.137911       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0917 00:30:31.400971       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:30:35.042652       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0917 00:30:40.469012       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I0917 00:30:40.753420       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.107.18.41"}
	I0917 00:30:52.844937       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0917 00:30:52.845079       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0917 00:30:52.921783       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0917 00:30:52.921920       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0917 00:30:53.025149       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0917 00:30:53.025317       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0917 00:30:53.070381       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0917 00:30:53.070495       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0917 00:30:54.035825       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0917 00:30:54.070638       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0917 00:30:54.089180       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0917 00:31:00.743390       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:31:49.572640       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:32:03.319565       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0917 00:33:02.168903       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.106.8.111"}
	
	
	==> kube-controller-manager [b3f4469924d131c6be96f6b2fdaabf64fcfe565ed41e23a4e7f61dba1fc8af26] <==
	E0917 00:31:15.918465       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0917 00:31:16.156512       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0917 00:31:16.157526       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	I0917 00:31:21.433564       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I0917 00:31:21.433602       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0917 00:31:21.530701       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I0917 00:31:21.530757       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0917 00:31:30.608859       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0917 00:31:30.609945       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0917 00:31:30.708391       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0917 00:31:30.709475       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0917 00:31:33.946935       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0917 00:31:33.948104       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0917 00:32:01.679590       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0917 00:32:01.680874       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0917 00:32:19.603099       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0917 00:32:19.604147       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0917 00:32:21.621266       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0917 00:32:21.622860       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0917 00:32:43.477154       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0917 00:32:43.478220       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0917 00:33:00.442554       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0917 00:33:00.443895       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0917 00:33:00.605014       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0917 00:33:00.606057       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [eeca9e39f37bcc76979c11631d0d15c557223f73ddc35a86d61173bea4373f3f] <==
	I0917 00:26:56.753714       1 server_linux.go:53] "Using iptables proxy"
	I0917 00:26:57.531380       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0917 00:26:57.633281       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0917 00:26:57.633431       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0917 00:26:57.633546       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 00:26:57.708216       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0917 00:26:57.708279       1 server_linux.go:132] "Using iptables Proxier"
	I0917 00:26:57.712384       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 00:26:57.713060       1 server.go:527] "Version info" version="v1.34.0"
	I0917 00:26:57.713090       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 00:26:57.748964       1 config.go:200] "Starting service config controller"
	I0917 00:26:57.748991       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0917 00:26:57.749032       1 config.go:106] "Starting endpoint slice config controller"
	I0917 00:26:57.749038       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0917 00:26:57.749049       1 config.go:403] "Starting serviceCIDR config controller"
	I0917 00:26:57.749053       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0917 00:26:57.846968       1 config.go:309] "Starting node config controller"
	I0917 00:26:57.946456       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0917 00:26:57.946566       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0917 00:26:57.859883       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0917 00:26:57.964768       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0917 00:26:57.987041       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [2d2c7b707b15aae42cf3045baf067d199ebcf3f391eca4ad88894aadd6bbe005] <==
	I0917 00:26:45.449948       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 00:26:45.453599       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0917 00:26:45.454575       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 00:26:45.454715       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0917 00:26:45.454798       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0917 00:26:45.458744       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0917 00:26:45.458835       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0917 00:26:45.458881       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0917 00:26:45.458953       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0917 00:26:45.458999       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0917 00:26:45.459058       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0917 00:26:45.468929       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E0917 00:26:45.469369       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0917 00:26:45.469410       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0917 00:26:45.469442       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0917 00:26:45.469475       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0917 00:26:45.469510       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0917 00:26:45.469544       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0917 00:26:45.470074       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0917 00:26:45.470186       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0917 00:26:45.470283       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0917 00:26:45.470968       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0917 00:26:45.471165       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0917 00:26:45.471326       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I0917 00:26:46.655609       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 17 00:32:43 addons-160127 kubelet[1564]: E0917 00:32:43.422629    1564 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/1114d8098e157e9258ca3860e3ba567a99023b86aea26c16fe8bd5ad13f44cbc/diff" to get inode usage: stat /var/lib/containers/storage/overlay/1114d8098e157e9258ca3860e3ba567a99023b86aea26c16fe8bd5ad13f44cbc/diff: no such file or directory, extraDiskErr: <nil>
	Sep 17 00:32:46 addons-160127 kubelet[1564]: E0917 00:32:46.919306    1564 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/1114d8098e157e9258ca3860e3ba567a99023b86aea26c16fe8bd5ad13f44cbc/diff" to get inode usage: stat /var/lib/containers/storage/overlay/1114d8098e157e9258ca3860e3ba567a99023b86aea26c16fe8bd5ad13f44cbc/diff: no such file or directory, extraDiskErr: <nil>
	Sep 17 00:32:46 addons-160127 kubelet[1564]: E0917 00:32:46.923656    1564 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/524e24c236783566c85b0dee7d5b44b09c229645b30cab9ebebdad63cb5963c9/diff" to get inode usage: stat /var/lib/containers/storage/overlay/524e24c236783566c85b0dee7d5b44b09c229645b30cab9ebebdad63cb5963c9/diff: no such file or directory, extraDiskErr: <nil>
	Sep 17 00:32:46 addons-160127 kubelet[1564]: E0917 00:32:46.924713    1564 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/1f2eb19e945d4cf191e0cb7b1d387c84a8f705538fba5270415b4e7ba48de3b8/diff" to get inode usage: stat /var/lib/containers/storage/overlay/1f2eb19e945d4cf191e0cb7b1d387c84a8f705538fba5270415b4e7ba48de3b8/diff: no such file or directory, extraDiskErr: <nil>
	Sep 17 00:32:46 addons-160127 kubelet[1564]: E0917 00:32:46.925917    1564 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/eb85f5919b1e449851f8d2aa541ba6cd75abfae2917024717fcad9bfcffe3cd6/diff" to get inode usage: stat /var/lib/containers/storage/overlay/eb85f5919b1e449851f8d2aa541ba6cd75abfae2917024717fcad9bfcffe3cd6/diff: no such file or directory, extraDiskErr: <nil>
	Sep 17 00:32:46 addons-160127 kubelet[1564]: E0917 00:32:46.925950    1564 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/2698304809a1daffb194e130629e0af41882d04f217fcda51836ec264edad5cb/diff" to get inode usage: stat /var/lib/containers/storage/overlay/2698304809a1daffb194e130629e0af41882d04f217fcda51836ec264edad5cb/diff: no such file or directory, extraDiskErr: <nil>
	Sep 17 00:32:46 addons-160127 kubelet[1564]: E0917 00:32:46.930150    1564 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/2b5fba703f197fcce99b9f3c42b8708384b329f325a0cfd61cb5c91501a1a72d/diff" to get inode usage: stat /var/lib/containers/storage/overlay/2b5fba703f197fcce99b9f3c42b8708384b329f325a0cfd61cb5c91501a1a72d/diff: no such file or directory, extraDiskErr: <nil>
	Sep 17 00:32:46 addons-160127 kubelet[1564]: E0917 00:32:46.948433    1564 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/ae94230fc4578218327c7365ab3924dbe331061d8c3aa707508019a63b2c87b2/diff" to get inode usage: stat /var/lib/containers/storage/overlay/ae94230fc4578218327c7365ab3924dbe331061d8c3aa707508019a63b2c87b2/diff: no such file or directory, extraDiskErr: <nil>
	Sep 17 00:32:46 addons-160127 kubelet[1564]: E0917 00:32:46.952695    1564 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/524e24c236783566c85b0dee7d5b44b09c229645b30cab9ebebdad63cb5963c9/diff" to get inode usage: stat /var/lib/containers/storage/overlay/524e24c236783566c85b0dee7d5b44b09c229645b30cab9ebebdad63cb5963c9/diff: no such file or directory, extraDiskErr: <nil>
	Sep 17 00:32:46 addons-160127 kubelet[1564]: E0917 00:32:46.962040    1564 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/1ea1401caeecff3fbf9d85235f068cc9ea4017bd398b65f37a0c52dab83095db/diff" to get inode usage: stat /var/lib/containers/storage/overlay/1ea1401caeecff3fbf9d85235f068cc9ea4017bd398b65f37a0c52dab83095db/diff: no such file or directory, extraDiskErr: <nil>
	Sep 17 00:32:46 addons-160127 kubelet[1564]: E0917 00:32:46.964301    1564 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/276e6ce29d7afe98003b78101e203c9d4c0f8c06467b5e54bd84d26f5530df82/diff" to get inode usage: stat /var/lib/containers/storage/overlay/276e6ce29d7afe98003b78101e203c9d4c0f8c06467b5e54bd84d26f5530df82/diff: no such file or directory, extraDiskErr: <nil>
	Sep 17 00:32:46 addons-160127 kubelet[1564]: E0917 00:32:46.965601    1564 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/566c18feff0e3a3e5f9887e797b168dc6113ce2fc40e193a96dab98e50f69733/diff" to get inode usage: stat /var/lib/containers/storage/overlay/566c18feff0e3a3e5f9887e797b168dc6113ce2fc40e193a96dab98e50f69733/diff: no such file or directory, extraDiskErr: <nil>
	Sep 17 00:32:46 addons-160127 kubelet[1564]: E0917 00:32:46.966711    1564 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/83b252cc0032864beb96f6162efeb450846e1ad70a9a7d767c62c68b3fde5fbd/diff" to get inode usage: stat /var/lib/containers/storage/overlay/83b252cc0032864beb96f6162efeb450846e1ad70a9a7d767c62c68b3fde5fbd/diff: no such file or directory, extraDiskErr: <nil>
	Sep 17 00:32:46 addons-160127 kubelet[1564]: E0917 00:32:46.967915    1564 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/d76062a1365114a2ae02d79642276eadecdaa4209eee6cb6ba118e67775af06a/diff" to get inode usage: stat /var/lib/containers/storage/overlay/d76062a1365114a2ae02d79642276eadecdaa4209eee6cb6ba118e67775af06a/diff: no such file or directory, extraDiskErr: <nil>
	Sep 17 00:32:46 addons-160127 kubelet[1564]: E0917 00:32:46.982196    1564 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/d5771668108205aa1d66356f2b3133d10d7c5a8abc7d4a07c1ea691f3735903a/diff" to get inode usage: stat /var/lib/containers/storage/overlay/d5771668108205aa1d66356f2b3133d10d7c5a8abc7d4a07c1ea691f3735903a/diff: no such file or directory, extraDiskErr: <nil>
	Sep 17 00:32:46 addons-160127 kubelet[1564]: E0917 00:32:46.983292    1564 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/1a636364522c5f7888307c52fef539d9db4a237c8f78b360b28bfd035339face/diff" to get inode usage: stat /var/lib/containers/storage/overlay/1a636364522c5f7888307c52fef539d9db4a237c8f78b360b28bfd035339face/diff: no such file or directory, extraDiskErr: <nil>
	Sep 17 00:32:46 addons-160127 kubelet[1564]: E0917 00:32:46.984388    1564 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/bf438281c02354e4f55b03aa69319f0fdbf169cf4496dffcaabcb5769d594777/diff" to get inode usage: stat /var/lib/containers/storage/overlay/bf438281c02354e4f55b03aa69319f0fdbf169cf4496dffcaabcb5769d594777/diff: no such file or directory, extraDiskErr: <nil>
	Sep 17 00:32:47 addons-160127 kubelet[1564]: E0917 00:32:47.001325    1564 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/688b38db67dfa4f2fce10a7fbbc4af929c9c5a4716c9ec09eda12cf2cbe9d7bc/diff" to get inode usage: stat /var/lib/containers/storage/overlay/688b38db67dfa4f2fce10a7fbbc4af929c9c5a4716c9ec09eda12cf2cbe9d7bc/diff: no such file or directory, extraDiskErr: <nil>
	Sep 17 00:32:47 addons-160127 kubelet[1564]: E0917 00:32:47.050790    1564 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758069167050492913 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:597481} inodes_used:{value:225}}"
	Sep 17 00:32:47 addons-160127 kubelet[1564]: E0917 00:32:47.050965    1564 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758069167050492913 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:597481} inodes_used:{value:225}}"
	Sep 17 00:32:57 addons-160127 kubelet[1564]: E0917 00:32:57.053470    1564 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758069177053192788 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:597481} inodes_used:{value:225}}"
	Sep 17 00:32:57 addons-160127 kubelet[1564]: E0917 00:32:57.053507    1564 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758069177053192788 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:597481} inodes_used:{value:225}}"
	Sep 17 00:33:01 addons-160127 kubelet[1564]: I0917 00:33:01.991588    1564 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddg6b\" (UniqueName: \"kubernetes.io/projected/52f07d1d-47a5-41b9-9b8f-06539806f2db-kube-api-access-ddg6b\") pod \"hello-world-app-5d498dc89-kpnxw\" (UID: \"52f07d1d-47a5-41b9-9b8f-06539806f2db\") " pod="default/hello-world-app-5d498dc89-kpnxw"
	Sep 17 00:33:02 addons-160127 kubelet[1564]: W0917 00:33:02.342215    1564 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/a07e817d82f4cc2007cdc12de7abcbcc9e8d712045b29bdd211aeb08d510d4bf/crio-2f3360586bb301cd0e7096a282cf2bf62bba5988b0a624ce11504625708109ae WatchSource:0}: Error finding container 2f3360586bb301cd0e7096a282cf2bf62bba5988b0a624ce11504625708109ae: Status 404 returned error can't find the container with id 2f3360586bb301cd0e7096a282cf2bf62bba5988b0a624ce11504625708109ae
	Sep 17 00:33:04 addons-160127 kubelet[1564]: I0917 00:33:04.222044    1564 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-5d498dc89-kpnxw" podStartSLOduration=2.190248645 podStartE2EDuration="3.222025337s" podCreationTimestamp="2025-09-17 00:33:01 +0000 UTC" firstStartedPulling="2025-09-17 00:33:02.347133567 +0000 UTC m=+375.662325424" lastFinishedPulling="2025-09-17 00:33:03.378910267 +0000 UTC m=+376.694102116" observedRunningTime="2025-09-17 00:33:04.220675452 +0000 UTC m=+377.535867325" watchObservedRunningTime="2025-09-17 00:33:04.222025337 +0000 UTC m=+377.537217227"
	
	
	==> storage-provisioner [be793dccdb22ed50d1cd357ba7069fd79a3cccddf9a2ef68a63fa53546c5cc70] <==
	W0917 00:32:39.964329       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:32:41.967042       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:32:41.971200       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:32:43.974400       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:32:43.978534       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:32:45.981509       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:32:45.985888       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:32:47.989016       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:32:47.993938       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:32:49.997279       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:32:50.001882       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:32:52.005013       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:32:52.015522       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:32:54.018480       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:32:54.023717       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:32:56.026837       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:32:56.031369       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:32:58.034596       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:32:58.039702       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:33:00.047722       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:33:00.071008       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:33:02.129822       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:33:02.159877       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:33:04.165758       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0917 00:33:04.171092       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-160127 -n addons-160127
helpers_test.go:269: (dbg) Run:  kubectl --context addons-160127 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-w4bbl ingress-nginx-admission-patch-q77zt
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-160127 describe pod ingress-nginx-admission-create-w4bbl ingress-nginx-admission-patch-q77zt
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-160127 describe pod ingress-nginx-admission-create-w4bbl ingress-nginx-admission-patch-q77zt: exit status 1 (95.463081ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-w4bbl" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-q77zt" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-160127 describe pod ingress-nginx-admission-create-w4bbl ingress-nginx-admission-patch-q77zt: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-160127 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-160127 addons disable ingress-dns --alsologtostderr -v=1: (1.073573421s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-160127 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-160127 addons disable ingress --alsologtostderr -v=1: (7.833240545s)
--- FAIL: TestAddons/parallel/Ingress (154.23s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (603.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-619464 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-619464 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-hcmp5" [64ba9e0d-ac00-456e-8cd0-8327bcd1d23d] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-619464 -n functional-619464
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-09-17 00:47:09.068219984 +0000 UTC m=+1307.718005702
functional_test.go:1645: (dbg) Run:  kubectl --context functional-619464 describe po hello-node-connect-7d85dfc575-hcmp5 -n default
functional_test.go:1645: (dbg) kubectl --context functional-619464 describe po hello-node-connect-7d85dfc575-hcmp5 -n default:
Name:             hello-node-connect-7d85dfc575-hcmp5
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-619464/192.168.49.2
Start Time:       Wed, 17 Sep 2025 00:37:08 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9k9zt (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-9k9zt:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-hcmp5 to functional-619464
Normal   Pulling    7m4s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m4s (x5 over 10m)      kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
Warning  Failed     7m4s (x5 over 10m)      kubelet            Error: ErrImagePull
Warning  Failed     4m52s (x20 over 9m59s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m40s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1645: (dbg) Run:  kubectl --context functional-619464 logs hello-node-connect-7d85dfc575-hcmp5 -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-619464 logs hello-node-connect-7d85dfc575-hcmp5 -n default: exit status 1 (97.916548ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-hcmp5" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-619464 logs hello-node-connect-7d85dfc575-hcmp5 -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-619464 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-hcmp5
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-619464/192.168.49.2
Start Time:       Wed, 17 Sep 2025 00:37:08 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9k9zt (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-9k9zt:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-hcmp5 to functional-619464
Normal   Pulling    7m4s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m4s (x5 over 10m)      kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
Warning  Failed     7m4s (x5 over 10m)      kubelet            Error: ErrImagePull
Warning  Failed     4m52s (x20 over 9m59s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m40s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
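The `ErrImagePull` events above show the root cause: CRI-O refuses to resolve the short image name `kicbase/echo-server` because no `unqualified-search-registries` are configured in `/etc/containers/registries.conf` on the node. A hedged sketch of the two usual remedies, assuming the image lives on Docker Hub (the registry is an assumption, not stated in the log):

```shell
# Fix 1: use a fully-qualified image reference so no short-name
# resolution is needed (docker.io assumed to host kicbase/echo-server):
kubectl --context functional-619464 set image deployment/hello-node-connect \
  echo-server=docker.io/kicbase/echo-server

# Fix 2: allow short names to search docker.io on the minikube node
# by appending to /etc/containers/registries.conf and restarting CRI-O:
minikube -p functional-619464 ssh -- \
  "echo 'unqualified-search-registries = [\"docker.io\"]' \
     | sudo tee -a /etc/containers/registries.conf \
   && sudo systemctl restart crio"
```

Either change should let the kubelet's next pull attempt succeed; the first is generally preferred in test fixtures since it does not depend on node configuration.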

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-619464 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-619464 logs -l app=hello-node-connect: exit status 1 (83.626445ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-hcmp5" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-619464 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-619464 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.98.43.163
IPs:                      10.98.43.163
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30524/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-619464
helpers_test.go:243: (dbg) docker inspect functional-619464:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "58aa50bb46c80b11850d5c5bfab306932af2f9aa75b605de8f062407060047a6",
	        "Created": "2025-09-17T00:34:22.930844808Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 877638,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-17T00:34:22.998283774Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:3d6f74760dfc17060da5abc5d463d3d45b4ceea05955c9cc42b3ec56cb38cc48",
	        "ResolvConfPath": "/var/lib/docker/containers/58aa50bb46c80b11850d5c5bfab306932af2f9aa75b605de8f062407060047a6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/58aa50bb46c80b11850d5c5bfab306932af2f9aa75b605de8f062407060047a6/hostname",
	        "HostsPath": "/var/lib/docker/containers/58aa50bb46c80b11850d5c5bfab306932af2f9aa75b605de8f062407060047a6/hosts",
	        "LogPath": "/var/lib/docker/containers/58aa50bb46c80b11850d5c5bfab306932af2f9aa75b605de8f062407060047a6/58aa50bb46c80b11850d5c5bfab306932af2f9aa75b605de8f062407060047a6-json.log",
	        "Name": "/functional-619464",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-619464:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-619464",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "58aa50bb46c80b11850d5c5bfab306932af2f9aa75b605de8f062407060047a6",
	                "LowerDir": "/var/lib/docker/overlay2/7b5c98e22caf74825d383c2da7d1986062fb915bce572e93725f9049d9c4aa97-init/diff:/var/lib/docker/overlay2/cd42a5ab2cf4c74437647f2d8b0837602d53b1f49cb4003f87c861b49a5e1d53/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7b5c98e22caf74825d383c2da7d1986062fb915bce572e93725f9049d9c4aa97/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7b5c98e22caf74825d383c2da7d1986062fb915bce572e93725f9049d9c4aa97/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7b5c98e22caf74825d383c2da7d1986062fb915bce572e93725f9049d9c4aa97/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-619464",
	                "Source": "/var/lib/docker/volumes/functional-619464/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-619464",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-619464",
	                "name.minikube.sigs.k8s.io": "functional-619464",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5c9363b5a6f81080a0674b3cc2723c62dcbcab4ea09e0fcc1b3f0cea931fd245",
	            "SandboxKey": "/var/run/docker/netns/5c9363b5a6f8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33568"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33569"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33572"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33570"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33571"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-619464": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f6:41:ff:61:23:0d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a5d1fb23cba13166ada5a76779a93464d5c378fd158b428d498d278a402bf32a",
	                    "EndpointID": "fc938c3db78415db99e2d9a4d458e480b1d30d251c46d33bcd4162a19cf1d16d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-619464",
	                        "58aa50bb46c8"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-619464 -n functional-619464
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-619464 logs -n 25: (1.746657929s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                   ARGS                                                   │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                         │ minikube          │ jenkins │ v1.37.0 │ 17 Sep 25 00:36 UTC │ 17 Sep 25 00:36 UTC │
	│ cache   │ list                                                                                                     │ minikube          │ jenkins │ v1.37.0 │ 17 Sep 25 00:36 UTC │ 17 Sep 25 00:36 UTC │
	│ ssh     │ functional-619464 ssh sudo crictl images                                                                 │ functional-619464 │ jenkins │ v1.37.0 │ 17 Sep 25 00:36 UTC │ 17 Sep 25 00:36 UTC │
	│ ssh     │ functional-619464 ssh sudo crictl rmi registry.k8s.io/pause:latest                                       │ functional-619464 │ jenkins │ v1.37.0 │ 17 Sep 25 00:36 UTC │ 17 Sep 25 00:36 UTC │
	│ ssh     │ functional-619464 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                  │ functional-619464 │ jenkins │ v1.37.0 │ 17 Sep 25 00:36 UTC │                     │
	│ cache   │ functional-619464 cache reload                                                                           │ functional-619464 │ jenkins │ v1.37.0 │ 17 Sep 25 00:36 UTC │ 17 Sep 25 00:36 UTC │
	│ ssh     │ functional-619464 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                  │ functional-619464 │ jenkins │ v1.37.0 │ 17 Sep 25 00:36 UTC │ 17 Sep 25 00:36 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                         │ minikube          │ jenkins │ v1.37.0 │ 17 Sep 25 00:36 UTC │ 17 Sep 25 00:36 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                      │ minikube          │ jenkins │ v1.37.0 │ 17 Sep 25 00:36 UTC │ 17 Sep 25 00:36 UTC │
	│ kubectl │ functional-619464 kubectl -- --context functional-619464 get pods                                        │ functional-619464 │ jenkins │ v1.37.0 │ 17 Sep 25 00:36 UTC │ 17 Sep 25 00:36 UTC │
	│ start   │ -p functional-619464 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all │ functional-619464 │ jenkins │ v1.37.0 │ 17 Sep 25 00:36 UTC │ 17 Sep 25 00:36 UTC │
	│ service │ invalid-svc -p functional-619464                                                                         │ functional-619464 │ jenkins │ v1.37.0 │ 17 Sep 25 00:36 UTC │                     │
	│ config  │ functional-619464 config unset cpus                                                                      │ functional-619464 │ jenkins │ v1.37.0 │ 17 Sep 25 00:36 UTC │ 17 Sep 25 00:36 UTC │
	│ ssh     │ functional-619464 ssh echo hello                                                                         │ functional-619464 │ jenkins │ v1.37.0 │ 17 Sep 25 00:36 UTC │ 17 Sep 25 00:36 UTC │
	│ config  │ functional-619464 config get cpus                                                                        │ functional-619464 │ jenkins │ v1.37.0 │ 17 Sep 25 00:36 UTC │                     │
	│ config  │ functional-619464 config set cpus 2                                                                      │ functional-619464 │ jenkins │ v1.37.0 │ 17 Sep 25 00:36 UTC │ 17 Sep 25 00:36 UTC │
	│ config  │ functional-619464 config get cpus                                                                        │ functional-619464 │ jenkins │ v1.37.0 │ 17 Sep 25 00:36 UTC │ 17 Sep 25 00:36 UTC │
	│ config  │ functional-619464 config unset cpus                                                                      │ functional-619464 │ jenkins │ v1.37.0 │ 17 Sep 25 00:36 UTC │ 17 Sep 25 00:36 UTC │
	│ ssh     │ functional-619464 ssh cat /etc/hostname                                                                  │ functional-619464 │ jenkins │ v1.37.0 │ 17 Sep 25 00:36 UTC │ 17 Sep 25 00:36 UTC │
	│ config  │ functional-619464 config get cpus                                                                        │ functional-619464 │ jenkins │ v1.37.0 │ 17 Sep 25 00:36 UTC │                     │
	│ tunnel  │ functional-619464 tunnel --alsologtostderr                                                               │ functional-619464 │ jenkins │ v1.37.0 │ 17 Sep 25 00:36 UTC │                     │
	│ tunnel  │ functional-619464 tunnel --alsologtostderr                                                               │ functional-619464 │ jenkins │ v1.37.0 │ 17 Sep 25 00:36 UTC │                     │
	│ tunnel  │ functional-619464 tunnel --alsologtostderr                                                               │ functional-619464 │ jenkins │ v1.37.0 │ 17 Sep 25 00:36 UTC │                     │
	│ addons  │ functional-619464 addons list                                                                            │ functional-619464 │ jenkins │ v1.37.0 │ 17 Sep 25 00:37 UTC │ 17 Sep 25 00:37 UTC │
	│ addons  │ functional-619464 addons list -o json                                                                    │ functional-619464 │ jenkins │ v1.37.0 │ 17 Sep 25 00:37 UTC │ 17 Sep 25 00:37 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/17 00:36:10
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 00:36:10.821397  882338 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:36:10.821590  882338 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:36:10.821595  882338 out.go:374] Setting ErrFile to fd 2...
	I0917 00:36:10.821598  882338 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:36:10.821947  882338 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-857204/.minikube/bin
	I0917 00:36:10.822516  882338 out.go:368] Setting JSON to false
	I0917 00:36:10.823637  882338 start.go:130] hostinfo: {"hostname":"ip-172-31-30-239","uptime":11909,"bootTime":1758057462,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0917 00:36:10.823712  882338 start.go:140] virtualization:  
	I0917 00:36:10.827309  882338 out.go:179] * [functional-619464] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0917 00:36:10.831167  882338 out.go:179]   - MINIKUBE_LOCATION=21550
	I0917 00:36:10.831247  882338 notify.go:220] Checking for updates...
	I0917 00:36:10.836994  882338 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 00:36:10.839960  882338 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-857204/kubeconfig
	I0917 00:36:10.842887  882338 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-857204/.minikube
	I0917 00:36:10.845835  882338 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0917 00:36:10.848765  882338 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 00:36:10.852165  882338 config.go:182] Loaded profile config "functional-619464": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:36:10.852257  882338 driver.go:421] Setting default libvirt URI to qemu:///system
	I0917 00:36:10.879357  882338 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0917 00:36:10.879466  882338 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:36:10.943257  882338 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-09-17 00:36:10.934150735 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0917 00:36:10.943363  882338 docker.go:318] overlay module found
	I0917 00:36:10.946559  882338 out.go:179] * Using the docker driver based on existing profile
	I0917 00:36:10.949416  882338 start.go:304] selected driver: docker
	I0917 00:36:10.949426  882338 start.go:918] validating driver "docker" against &{Name:functional-619464 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-619464 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Disa
bleCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:36:10.949536  882338 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 00:36:10.949652  882338 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:36:11.013689  882338 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-09-17 00:36:11.002556848 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0917 00:36:11.014118  882338 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 00:36:11.014135  882338 cni.go:84] Creating CNI manager for ""
	I0917 00:36:11.014190  882338 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0917 00:36:11.014230  882338 start.go:348] cluster config:
	{Name:functional-619464 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-619464 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket:
NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Disab
leCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:36:11.017452  882338 out.go:179] * Starting "functional-619464" primary control-plane node in "functional-619464" cluster
	I0917 00:36:11.020395  882338 cache.go:123] Beginning downloading kic base image for docker with crio
	I0917 00:36:11.023288  882338 out.go:179] * Pulling base image v0.0.48 ...
	I0917 00:36:11.026239  882338 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 00:36:11.026292  882338 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:36:11.026343  882338 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21550-857204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4
	I0917 00:36:11.026354  882338 cache.go:58] Caching tarball of preloaded images
	I0917 00:36:11.026443  882338 preload.go:172] Found /home/jenkins/minikube-integration/21550-857204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0917 00:36:11.026452  882338 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0917 00:36:11.026571  882338 profile.go:143] Saving config to /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/functional-619464/config.json ...
	I0917 00:36:11.047709  882338 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0917 00:36:11.047720  882338 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0917 00:36:11.047737  882338 cache.go:232] Successfully downloaded all kic artifacts
	I0917 00:36:11.047758  882338 start.go:360] acquireMachinesLock for functional-619464: {Name:mk957538915d91e0c3f809cd817ca801e466fa52 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 00:36:11.047830  882338 start.go:364] duration metric: took 55.862µs to acquireMachinesLock for "functional-619464"
	I0917 00:36:11.047849  882338 start.go:96] Skipping create...Using existing machine configuration
	I0917 00:36:11.047853  882338 fix.go:54] fixHost starting: 
	I0917 00:36:11.048236  882338 cli_runner.go:164] Run: docker container inspect functional-619464 --format={{.State.Status}}
	I0917 00:36:11.065304  882338 fix.go:112] recreateIfNeeded on functional-619464: state=Running err=<nil>
	W0917 00:36:11.065330  882338 fix.go:138] unexpected machine state, will restart: <nil>
	I0917 00:36:11.068545  882338 out.go:252] * Updating the running docker "functional-619464" container ...
	I0917 00:36:11.068614  882338 machine.go:93] provisionDockerMachine start ...
	I0917 00:36:11.068806  882338 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-619464
	I0917 00:36:11.088610  882338 main.go:141] libmachine: Using SSH client type: native
	I0917 00:36:11.088992  882338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 33568 <nil> <nil>}
	I0917 00:36:11.088999  882338 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 00:36:11.228185  882338 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-619464
	
	I0917 00:36:11.228207  882338 ubuntu.go:182] provisioning hostname "functional-619464"
	I0917 00:36:11.228270  882338 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-619464
	I0917 00:36:11.245847  882338 main.go:141] libmachine: Using SSH client type: native
	I0917 00:36:11.246133  882338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 33568 <nil> <nil>}
	I0917 00:36:11.246143  882338 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-619464 && echo "functional-619464" | sudo tee /etc/hostname
	I0917 00:36:11.396271  882338 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-619464
	
	I0917 00:36:11.396355  882338 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-619464
	I0917 00:36:11.413477  882338 main.go:141] libmachine: Using SSH client type: native
	I0917 00:36:11.413782  882338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 33568 <nil> <nil>}
	I0917 00:36:11.413797  882338 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-619464' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-619464/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-619464' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 00:36:11.557015  882338 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 00:36:11.557030  882338 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21550-857204/.minikube CaCertPath:/home/jenkins/minikube-integration/21550-857204/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21550-857204/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21550-857204/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21550-857204/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21550-857204/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21550-857204/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21550-857204/.minikube}
	I0917 00:36:11.557047  882338 ubuntu.go:190] setting up certificates
	I0917 00:36:11.557057  882338 provision.go:84] configureAuth start
	I0917 00:36:11.557119  882338 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-619464
	I0917 00:36:11.575600  882338 provision.go:143] copyHostCerts
	I0917 00:36:11.575672  882338 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-857204/.minikube/ca.pem, removing ...
	I0917 00:36:11.575687  882338 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-857204/.minikube/ca.pem
	I0917 00:36:11.575764  882338 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-857204/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21550-857204/.minikube/ca.pem (1078 bytes)
	I0917 00:36:11.575903  882338 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-857204/.minikube/cert.pem, removing ...
	I0917 00:36:11.575908  882338 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-857204/.minikube/cert.pem
	I0917 00:36:11.575933  882338 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-857204/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21550-857204/.minikube/cert.pem (1123 bytes)
	I0917 00:36:11.575994  882338 exec_runner.go:144] found /home/jenkins/minikube-integration/21550-857204/.minikube/key.pem, removing ...
	I0917 00:36:11.575998  882338 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21550-857204/.minikube/key.pem
	I0917 00:36:11.576022  882338 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21550-857204/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21550-857204/.minikube/key.pem (1679 bytes)
	I0917 00:36:11.576069  882338 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21550-857204/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21550-857204/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21550-857204/.minikube/certs/ca-key.pem org=jenkins.functional-619464 san=[127.0.0.1 192.168.49.2 functional-619464 localhost minikube]
	I0917 00:36:12.225932  882338 provision.go:177] copyRemoteCerts
	I0917 00:36:12.225983  882338 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 00:36:12.226024  882338 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-619464
	I0917 00:36:12.243673  882338 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33568 SSHKeyPath:/home/jenkins/minikube-integration/21550-857204/.minikube/machines/functional-619464/id_rsa Username:docker}
	I0917 00:36:12.341158  882338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-857204/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0917 00:36:12.365313  882338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-857204/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 00:36:12.388995  882338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-857204/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0917 00:36:12.413312  882338 provision.go:87] duration metric: took 856.231417ms to configureAuth
	I0917 00:36:12.413328  882338 ubuntu.go:206] setting minikube options for container-runtime
	I0917 00:36:12.413539  882338 config.go:182] Loaded profile config "functional-619464": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:36:12.413638  882338 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-619464
	I0917 00:36:12.434931  882338 main.go:141] libmachine: Using SSH client type: native
	I0917 00:36:12.435235  882338 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 33568 <nil> <nil>}
	I0917 00:36:12.435247  882338 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0917 00:36:17.862295  882338 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0917 00:36:17.862313  882338 machine.go:96] duration metric: took 6.793692528s to provisionDockerMachine
	I0917 00:36:17.862322  882338 start.go:293] postStartSetup for "functional-619464" (driver="docker")
	I0917 00:36:17.862341  882338 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 00:36:17.862399  882338 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 00:36:17.862444  882338 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-619464
	I0917 00:36:17.880396  882338 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33568 SSHKeyPath:/home/jenkins/minikube-integration/21550-857204/.minikube/machines/functional-619464/id_rsa Username:docker}
	I0917 00:36:17.977469  882338 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 00:36:17.980738  882338 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 00:36:17.980761  882338 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 00:36:17.980770  882338 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 00:36:17.980776  882338 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 00:36:17.980785  882338 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-857204/.minikube/addons for local assets ...
	I0917 00:36:17.980837  882338 filesync.go:126] Scanning /home/jenkins/minikube-integration/21550-857204/.minikube/files for local assets ...
	I0917 00:36:17.980908  882338 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-857204/.minikube/files/etc/ssl/certs/8590532.pem -> 8590532.pem in /etc/ssl/certs
	I0917 00:36:17.980979  882338 filesync.go:149] local asset: /home/jenkins/minikube-integration/21550-857204/.minikube/files/etc/test/nested/copy/859053/hosts -> hosts in /etc/test/nested/copy/859053
	I0917 00:36:17.981022  882338 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/859053
	I0917 00:36:17.989427  882338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-857204/.minikube/files/etc/ssl/certs/8590532.pem --> /etc/ssl/certs/8590532.pem (1708 bytes)
	I0917 00:36:18.018392  882338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-857204/.minikube/files/etc/test/nested/copy/859053/hosts --> /etc/test/nested/copy/859053/hosts (40 bytes)
	I0917 00:36:18.044434  882338 start.go:296] duration metric: took 182.09585ms for postStartSetup
	I0917 00:36:18.044511  882338 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:36:18.044592  882338 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-619464
	I0917 00:36:18.066995  882338 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33568 SSHKeyPath:/home/jenkins/minikube-integration/21550-857204/.minikube/machines/functional-619464/id_rsa Username:docker}
	I0917 00:36:18.161901  882338 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 00:36:18.167242  882338 fix.go:56] duration metric: took 7.119381072s for fixHost
	I0917 00:36:18.167257  882338 start.go:83] releasing machines lock for "functional-619464", held for 7.119420908s
	I0917 00:36:18.167332  882338 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-619464
	I0917 00:36:18.184449  882338 ssh_runner.go:195] Run: cat /version.json
	I0917 00:36:18.184496  882338 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-619464
	I0917 00:36:18.184763  882338 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 00:36:18.184817  882338 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-619464
	I0917 00:36:18.205400  882338 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33568 SSHKeyPath:/home/jenkins/minikube-integration/21550-857204/.minikube/machines/functional-619464/id_rsa Username:docker}
	I0917 00:36:18.206692  882338 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33568 SSHKeyPath:/home/jenkins/minikube-integration/21550-857204/.minikube/machines/functional-619464/id_rsa Username:docker}
	I0917 00:36:18.425056  882338 ssh_runner.go:195] Run: systemctl --version
	I0917 00:36:18.429342  882338 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0917 00:36:18.573753  882338 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 00:36:18.578246  882338 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:36:18.587332  882338 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0917 00:36:18.587413  882338 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 00:36:18.596890  882338 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0917 00:36:18.596905  882338 start.go:495] detecting cgroup driver to use...
	I0917 00:36:18.596937  882338 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0917 00:36:18.596990  882338 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0917 00:36:18.611120  882338 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0917 00:36:18.623418  882338 docker.go:218] disabling cri-docker service (if available) ...
	I0917 00:36:18.623473  882338 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 00:36:18.637160  882338 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 00:36:18.649348  882338 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 00:36:18.781497  882338 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 00:36:18.907439  882338 docker.go:234] disabling docker service ...
	I0917 00:36:18.907525  882338 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 00:36:18.920873  882338 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 00:36:18.933190  882338 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 00:36:19.058547  882338 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 00:36:19.185245  882338 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 00:36:19.197029  882338 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 00:36:19.213118  882338 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0917 00:36:19.213195  882338 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:36:19.222832  882338 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0917 00:36:19.222904  882338 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:36:19.233142  882338 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:36:19.243268  882338 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:36:19.253539  882338 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 00:36:19.262906  882338 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:36:19.273027  882338 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:36:19.282895  882338 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0917 00:36:19.292735  882338 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 00:36:19.301247  882338 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 00:36:19.309879  882338 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:36:19.438527  882338 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0917 00:36:23.993567  882338 ssh_runner.go:235] Completed: sudo systemctl restart crio: (4.555018076s)
	I0917 00:36:23.993583  882338 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0917 00:36:23.993651  882338 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0917 00:36:23.997306  882338 start.go:563] Will wait 60s for crictl version
	I0917 00:36:23.997361  882338 ssh_runner.go:195] Run: which crictl
	I0917 00:36:24.000728  882338 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 00:36:24.040425  882338 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0917 00:36:24.040502  882338 ssh_runner.go:195] Run: crio --version
	I0917 00:36:24.082896  882338 ssh_runner.go:195] Run: crio --version
	I0917 00:36:24.126344  882338 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0917 00:36:24.129242  882338 cli_runner.go:164] Run: docker network inspect functional-619464 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 00:36:24.145869  882338 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0917 00:36:24.152925  882338 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0917 00:36:24.155931  882338 kubeadm.go:875] updating cluster {Name:functional-619464 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-619464 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 00:36:24.156067  882338 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 00:36:24.156146  882338 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 00:36:24.205711  882338 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 00:36:24.205722  882338 crio.go:433] Images already preloaded, skipping extraction
	I0917 00:36:24.205776  882338 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 00:36:24.243540  882338 crio.go:514] all images are preloaded for cri-o runtime.
	I0917 00:36:24.243552  882338 cache_images.go:85] Images are preloaded, skipping loading
	I0917 00:36:24.243559  882338 kubeadm.go:926] updating node { 192.168.49.2 8441 v1.34.0 crio true true} ...
	I0917 00:36:24.243659  882338 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-619464 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:functional-619464 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 00:36:24.243735  882338 ssh_runner.go:195] Run: crio config
	I0917 00:36:24.312845  882338 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0917 00:36:24.312881  882338 cni.go:84] Creating CNI manager for ""
	I0917 00:36:24.312893  882338 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0917 00:36:24.312900  882338 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 00:36:24.312921  882338 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-619464 NodeName:functional-619464 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 00:36:24.313068  882338 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-619464"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0917 00:36:24.313149  882338 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0917 00:36:24.322271  882338 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 00:36:24.322330  882338 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 00:36:24.331323  882338 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0917 00:36:24.349278  882338 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 00:36:24.367441  882338 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2064 bytes)
	I0917 00:36:24.385748  882338 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0917 00:36:24.389420  882338 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 00:36:24.518907  882338 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 00:36:24.531626  882338 certs.go:68] Setting up /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/functional-619464 for IP: 192.168.49.2
	I0917 00:36:24.531638  882338 certs.go:194] generating shared ca certs ...
	I0917 00:36:24.531652  882338 certs.go:226] acquiring lock for ca certs: {Name:mk44de2cd489e13684c1d414a8a1e69ffc09119b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 00:36:24.531786  882338 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21550-857204/.minikube/ca.key
	I0917 00:36:24.531827  882338 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21550-857204/.minikube/proxy-client-ca.key
	I0917 00:36:24.531833  882338 certs.go:256] generating profile certs ...
	I0917 00:36:24.531916  882338 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/functional-619464/client.key
	I0917 00:36:24.531970  882338 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/functional-619464/apiserver.key.9f714abf
	I0917 00:36:24.532017  882338 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/functional-619464/proxy-client.key
	I0917 00:36:24.532521  882338 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-857204/.minikube/certs/859053.pem (1338 bytes)
	W0917 00:36:24.532657  882338 certs.go:480] ignoring /home/jenkins/minikube-integration/21550-857204/.minikube/certs/859053_empty.pem, impossibly tiny 0 bytes
	I0917 00:36:24.532667  882338 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-857204/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 00:36:24.532711  882338 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-857204/.minikube/certs/ca.pem (1078 bytes)
	I0917 00:36:24.532735  882338 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-857204/.minikube/certs/cert.pem (1123 bytes)
	I0917 00:36:24.533226  882338 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-857204/.minikube/certs/key.pem (1679 bytes)
	I0917 00:36:24.533327  882338 certs.go:484] found cert: /home/jenkins/minikube-integration/21550-857204/.minikube/files/etc/ssl/certs/8590532.pem (1708 bytes)
	I0917 00:36:24.534271  882338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-857204/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 00:36:24.561901  882338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-857204/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0917 00:36:24.587110  882338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-857204/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 00:36:24.612324  882338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-857204/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0917 00:36:24.636717  882338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/functional-619464/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0917 00:36:24.661476  882338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/functional-619464/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0917 00:36:24.687421  882338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/functional-619464/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 00:36:24.711694  882338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/functional-619464/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0917 00:36:24.736366  882338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-857204/.minikube/files/etc/ssl/certs/8590532.pem --> /usr/share/ca-certificates/8590532.pem (1708 bytes)
	I0917 00:36:24.760473  882338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-857204/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 00:36:24.785626  882338 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21550-857204/.minikube/certs/859053.pem --> /usr/share/ca-certificates/859053.pem (1338 bytes)
	I0917 00:36:24.809766  882338 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 00:36:24.828587  882338 ssh_runner.go:195] Run: openssl version
	I0917 00:36:24.834392  882338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8590532.pem && ln -fs /usr/share/ca-certificates/8590532.pem /etc/ssl/certs/8590532.pem"
	I0917 00:36:24.844177  882338 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8590532.pem
	I0917 00:36:24.847794  882338 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 17 00:34 /usr/share/ca-certificates/8590532.pem
	I0917 00:36:24.847860  882338 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8590532.pem
	I0917 00:36:24.854994  882338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8590532.pem /etc/ssl/certs/3ec20f2e.0"
	I0917 00:36:24.864787  882338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 00:36:24.874506  882338 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:36:24.878196  882338 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 00:26 /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:36:24.878255  882338 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 00:36:24.885468  882338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 00:36:24.895032  882338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/859053.pem && ln -fs /usr/share/ca-certificates/859053.pem /etc/ssl/certs/859053.pem"
	I0917 00:36:24.905031  882338 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/859053.pem
	I0917 00:36:24.908656  882338 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 17 00:34 /usr/share/ca-certificates/859053.pem
	I0917 00:36:24.908713  882338 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/859053.pem
	I0917 00:36:24.915948  882338 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/859053.pem /etc/ssl/certs/51391683.0"
	I0917 00:36:24.925190  882338 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 00:36:24.929106  882338 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0917 00:36:24.936233  882338 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0917 00:36:24.943420  882338 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0917 00:36:24.950572  882338 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0917 00:36:24.957591  882338 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0917 00:36:24.964657  882338 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0917 00:36:24.971891  882338 kubeadm.go:392] StartCluster: {Name:functional-619464 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-619464 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:36:24.971989  882338 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0917 00:36:24.972055  882338 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 00:36:25.013343  882338 cri.go:89] found id: "afcce67537073e6b37a0faf8c015c49b1e9437d634f6f209499713a1bc8b78f7"
	I0917 00:36:25.013359  882338 cri.go:89] found id: "e994b9a64408ef6407da17b19de963c830c0c21c06fba19333eea0855e3ad588"
	I0917 00:36:25.013364  882338 cri.go:89] found id: "d4e06fb37c833a9cde9c60e7057fc6dadba413c3674515d2013ce659b42eef3c"
	I0917 00:36:25.013367  882338 cri.go:89] found id: "32bca8144a9d9e614ff60955a38c49af64309c9cc2e9ffbcd99823e8db8b3525"
	I0917 00:36:25.013370  882338 cri.go:89] found id: "a268764f344a861374441228d7b52ac33aec3b455c1884931941041a0fe9c372"
	I0917 00:36:25.013373  882338 cri.go:89] found id: "361273c3e60505fe3d70c60b3d4d8a18ec5801c26d20f8ff649580b9b17d5e0a"
	I0917 00:36:25.013376  882338 cri.go:89] found id: "5ed2619693ffcd52858d052034bd3994978da8cd6fa1500f5d83a0544478803d"
	I0917 00:36:25.013379  882338 cri.go:89] found id: "a04e63445199b554b4b29c79daa31e76259884045abe0df5dae299d410e1e2f8"
	I0917 00:36:25.013381  882338 cri.go:89] found id: "5ccaced789dc750682ffc111b6d60a23d6641dbf56a0cdfe5baf873fcea6f687"
	I0917 00:36:25.013389  882338 cri.go:89] found id: "3e756f6e0f691588a7cf273b4feda83641656b03d2addf6e05cd8546cc757522"
	I0917 00:36:25.013392  882338 cri.go:89] found id: "7f386c3212e16cfcea1657ce9cdc0f6388a416b407fb76be8a41aeee68f81cb3"
	I0917 00:36:25.013409  882338 cri.go:89] found id: "35922ce4b265eee3a19e5a553e87f7a75072f595ddc9e4f394d18d27e04d94ca"
	I0917 00:36:25.013412  882338 cri.go:89] found id: "47d907a752936d399530d49e515f926a13c09fa2589f4fffeed8bc2ccacb92e3"
	I0917 00:36:25.013414  882338 cri.go:89] found id: "b7268d704c51903fc1a30a32a2f9c481af282d99e8b3dac058ea15fbd99c555a"
	I0917 00:36:25.013416  882338 cri.go:89] found id: "2514d365b6eacc4915073738e658c673f45566bf5db078c501b0adb7c3e4fd43"
	I0917 00:36:25.013422  882338 cri.go:89] found id: "acd98d27f51933bf90ca50354045600c1004477d8b687595bd272cd258f47a18"
	I0917 00:36:25.013424  882338 cri.go:89] found id: ""
	I0917 00:36:25.013482  882338 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-619464 -n functional-619464
helpers_test.go:269: (dbg) Run:  kubectl --context functional-619464 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-node-75c85bcc94-wm2g6 hello-node-connect-7d85dfc575-hcmp5
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-619464 describe pod hello-node-75c85bcc94-wm2g6 hello-node-connect-7d85dfc575-hcmp5
helpers_test.go:290: (dbg) kubectl --context functional-619464 describe pod hello-node-75c85bcc94-wm2g6 hello-node-connect-7d85dfc575-hcmp5:

                                                
                                                
-- stdout --
	Name:             hello-node-75c85bcc94-wm2g6
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-619464/192.168.49.2
	Start Time:       Wed, 17 Sep 2025 00:37:23 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wd4bg (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-wd4bg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m49s                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-wm2g6 to functional-619464
	  Normal   Pulling    6m46s (x5 over 9m49s)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m46s (x5 over 9m49s)   kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     6m46s (x5 over 9m49s)   kubelet            Error: ErrImagePull
	  Warning  Failed     4m42s (x20 over 9m49s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m28s (x21 over 9m49s)  kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-hcmp5
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-619464/192.168.49.2
	Start Time:       Wed, 17 Sep 2025 00:37:08 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9k9zt (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-9k9zt:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  10m                 default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-hcmp5 to functional-619464
	  Normal   Pulling    7m7s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m7s (x5 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     7m7s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    1s (x42 over 10m)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     1s (x42 over 10m)   kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.72s)
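The repeated `ErrImagePull`/`ImagePullBackOff` events above all trace to one cause stated by the kubelet: CRI-O only expands a bare short name like `kicbase/echo-server` when `/etc/containers/registries.conf` declares `unqualified-search-registries`, and the node defines none, so the pull fails before any registry is contacted. A minimal sketch of that policy check (the sample config written here is illustrative, not the node's actual file):

```shell
# Illustrative registries.conf with no unqualified-search-registries, matching
# the node above. With no search list, short names cannot resolve; a fully
# qualified reference (e.g. docker.io/kicbase/echo-server) bypasses the policy.
conf=$(mktemp)
printf '# no unqualified-search-registries defined\n' > "$conf"
if grep -q '^unqualified-search-registries' "$conf"; then
  result="short names resolve via search list"
else
  result="short names fail; use docker.io/kicbase/echo-server"
fi
echo "$result"
rm -f "$conf"
```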

                                                
                                    

TestFunctional/parallel/ServiceCmd/DeployApp (600.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-619464 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-619464 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-wm2g6" [23a658be-31d4-4c09-8c2f-d90b6af3a5ea] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E0917 00:38:56.176549  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/addons-160127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:39:23.884492  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/addons-160127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:43:56.176589  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/addons-160127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-619464 -n functional-619464
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-09-17 00:47:23.596062209 +0000 UTC m=+1322.245847928
functional_test.go:1460: (dbg) Run:  kubectl --context functional-619464 describe po hello-node-75c85bcc94-wm2g6 -n default
functional_test.go:1460: (dbg) kubectl --context functional-619464 describe po hello-node-75c85bcc94-wm2g6 -n default:
Name:             hello-node-75c85bcc94-wm2g6
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-619464/192.168.49.2
Start Time:       Wed, 17 Sep 2025 00:37:23 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wd4bg (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-wd4bg:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-wm2g6 to functional-619464
Normal   Pulling    6m57s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     6m57s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
Warning  Failed     6m57s (x5 over 10m)   kubelet            Error: ErrImagePull
Warning  Failed     4m53s (x20 over 10m)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m39s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-619464 logs hello-node-75c85bcc94-wm2g6 -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-619464 logs hello-node-75c85bcc94-wm2g6 -n default: exit status 1 (104.146337ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-wm2g6" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-619464 logs hello-node-75c85bcc94-wm2g6 -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.86s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-619464 service --namespace=default --https --url hello-node: exit status 115 (382.051854ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:30450
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-619464 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-619464 service hello-node --url --format={{.IP}}: exit status 115 (577.15445ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-619464 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.58s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-619464 service hello-node --url: exit status 115 (462.421419ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:30450
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-619464 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30450
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.46s)
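The three ServiceCmd failures above share one shape: `minikube service` still prints the NodePort URL on stdout, yet exits 115 (`SVC_UNREACHABLE`) because no Running pod backs `hello-node`. A mock of that contract (the shell function stands in for the real binary, which is not assumed to be present):

```shell
# Mock of the failure mode above: a URL appears on stdout while the exit
# status (115) signals SVC_UNREACHABLE, so callers must check the status
# rather than merely capturing stdout.
svc_url() { echo "http://192.168.49.2:30450"; return 115; }
if url=$(svc_url); then
  outcome="ok: $url"
else
  outcome="unreachable (exit $?): $url"   # $? is still 115 here
fi
echo "$outcome"
```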

                                                
                                    

Test pass (294/332)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 13.41
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.23
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.0/json-events 16.45
13 TestDownloadOnly/v1.34.0/preload-exists 0
17 TestDownloadOnly/v1.34.0/LogsDuration 0.08
18 TestDownloadOnly/v1.34.0/DeleteAll 0.21
19 TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds 0.15
21 TestBinaryMirror 0.65
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 181.6
31 TestAddons/serial/GCPAuth/Namespaces 0.18
32 TestAddons/serial/GCPAuth/FakeCredentials 11.86
35 TestAddons/parallel/Registry 17
36 TestAddons/parallel/RegistryCreds 0.76
38 TestAddons/parallel/InspektorGadget 6.3
39 TestAddons/parallel/MetricsServer 5.84
41 TestAddons/parallel/CSI 61.77
42 TestAddons/parallel/Headlamp 18.94
43 TestAddons/parallel/CloudSpanner 5.64
44 TestAddons/parallel/LocalPath 54.74
45 TestAddons/parallel/NvidiaDevicePlugin 5.91
46 TestAddons/parallel/Yakd 11.81
48 TestAddons/StoppedEnableDisable 12.15
49 TestCertOptions 33.71
50 TestCertExpiration 253.83
52 TestForceSystemdFlag 44.65
53 TestForceSystemdEnv 44.12
59 TestErrorSpam/setup 32.42
60 TestErrorSpam/start 0.78
61 TestErrorSpam/status 1.05
62 TestErrorSpam/pause 1.82
63 TestErrorSpam/unpause 1.86
64 TestErrorSpam/stop 1.42
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 75.01
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 29.7
71 TestFunctional/serial/KubeContext 0.06
72 TestFunctional/serial/KubectlGetPods 0.11
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.8
76 TestFunctional/serial/CacheCmd/cache/add_local 1.54
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.33
80 TestFunctional/serial/CacheCmd/cache/cache_reload 2.1
81 TestFunctional/serial/CacheCmd/cache/delete 0.11
82 TestFunctional/serial/MinikubeKubectlCmd 0.14
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
84 TestFunctional/serial/ExtraConfig 38.81
85 TestFunctional/serial/ComponentHealth 0.1
86 TestFunctional/serial/LogsCmd 1.78
87 TestFunctional/serial/LogsFileCmd 1.79
88 TestFunctional/serial/InvalidService 4.34
90 TestFunctional/parallel/ConfigCmd 0.46
91 TestFunctional/parallel/DashboardCmd 9.43
92 TestFunctional/parallel/DryRun 0.56
93 TestFunctional/parallel/InternationalLanguage 0.26
94 TestFunctional/parallel/StatusCmd 1.34
99 TestFunctional/parallel/AddonsCmd 0.19
100 TestFunctional/parallel/PersistentVolumeClaim 24.56
102 TestFunctional/parallel/SSHCmd 0.79
103 TestFunctional/parallel/CpCmd 1.96
105 TestFunctional/parallel/FileSync 0.35
106 TestFunctional/parallel/CertSync 2.15
110 TestFunctional/parallel/NodeLabels 0.14
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.57
114 TestFunctional/parallel/License 0.31
116 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.62
117 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.35
120 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.12
121 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
125 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.1
127 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
128 TestFunctional/parallel/ProfileCmd/profile_list 0.41
129 TestFunctional/parallel/ProfileCmd/profile_json_output 0.42
130 TestFunctional/parallel/MountCmd/any-port 8.86
131 TestFunctional/parallel/MountCmd/specific-port 1.93
132 TestFunctional/parallel/ServiceCmd/List 0.69
133 TestFunctional/parallel/MountCmd/VerifyCleanup 2.69
134 TestFunctional/parallel/ServiceCmd/JSONOutput 0.85
138 TestFunctional/parallel/Version/short 0.11
139 TestFunctional/parallel/Version/components 1.4
140 TestFunctional/parallel/ImageCommands/ImageListShort 0.29
141 TestFunctional/parallel/ImageCommands/ImageListTable 0.27
142 TestFunctional/parallel/ImageCommands/ImageListJson 0.27
143 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
144 TestFunctional/parallel/ImageCommands/ImageBuild 4.04
145 TestFunctional/parallel/ImageCommands/Setup 0.66
146 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.66
147 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.76
148 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.63
149 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.69
150 TestFunctional/parallel/ImageCommands/ImageRemove 0.71
151 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.97
152 TestFunctional/parallel/UpdateContextCmd/no_changes 0.2
153 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.25
154 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.21
155 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.71
156 TestFunctional/delete_echo-server_images 0.04
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
163 TestMultiControlPlane/serial/StartCluster 200.82
164 TestMultiControlPlane/serial/DeployApp 9.8
165 TestMultiControlPlane/serial/PingHostFromPods 1.59
166 TestMultiControlPlane/serial/AddWorkerNode 60.48
167 TestMultiControlPlane/serial/NodeLabels 0.1
168 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.1
169 TestMultiControlPlane/serial/CopyFile 19.41
170 TestMultiControlPlane/serial/StopSecondaryNode 3.04
171 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.81
172 TestMultiControlPlane/serial/RestartSecondaryNode 30.37
173 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.35
174 TestMultiControlPlane/serial/RestartClusterKeepsNodes 124.2
175 TestMultiControlPlane/serial/DeleteSecondaryNode 12.61
176 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.77
177 TestMultiControlPlane/serial/StopCluster 35.94
178 TestMultiControlPlane/serial/RestartCluster 82.37
179 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.75
180 TestMultiControlPlane/serial/AddSecondaryNode 70.15
181 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.01
185 TestJSONOutput/start/Command 76.49
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Command 0.77
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Command 0.68
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.86
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.25
210 TestKicCustomNetwork/create_custom_network 48.09
211 TestKicCustomNetwork/use_default_bridge_network 36.55
212 TestKicExistingNetwork 34.15
213 TestKicCustomSubnet 33.66
214 TestKicStaticIP 34.83
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 72.68
219 TestMountStart/serial/StartWithMountFirst 7.22
220 TestMountStart/serial/VerifyMountFirst 0.27
221 TestMountStart/serial/StartWithMountSecond 6.64
222 TestMountStart/serial/VerifyMountSecond 0.27
223 TestMountStart/serial/DeleteFirst 1.73
224 TestMountStart/serial/VerifyMountPostDelete 0.39
225 TestMountStart/serial/Stop 1.21
226 TestMountStart/serial/RestartStopped 8.26
227 TestMountStart/serial/VerifyMountPostStop 0.27
230 TestMultiNode/serial/FreshStart2Nodes 134.35
231 TestMultiNode/serial/DeployApp2Nodes 6.05
232 TestMultiNode/serial/PingHostFrom2Pods 0.96
233 TestMultiNode/serial/AddNode 55.03
234 TestMultiNode/serial/MultiNodeLabels 0.09
235 TestMultiNode/serial/ProfileList 0.7
236 TestMultiNode/serial/CopyFile 10.2
237 TestMultiNode/serial/StopNode 2.27
238 TestMultiNode/serial/StartAfterStop 8.21
239 TestMultiNode/serial/RestartKeepsNodes 78.97
240 TestMultiNode/serial/DeleteNode 5.61
241 TestMultiNode/serial/StopMultiNode 23.83
242 TestMultiNode/serial/RestartMultiNode 57.71
243 TestMultiNode/serial/ValidateNameConflict 34.26
248 TestPreload 141.2
250 TestScheduledStopUnix 106.65
253 TestInsufficientStorage 10.88
254 TestRunningBinaryUpgrade 55.06
256 TestKubernetesUpgrade 342.2
257 TestMissingContainerUpgrade 122.8
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
260 TestNoKubernetes/serial/StartWithK8s 45.09
261 TestNoKubernetes/serial/StartWithStopK8s 8.45
262 TestNoKubernetes/serial/Start 9.69
263 TestNoKubernetes/serial/VerifyK8sNotRunning 0.39
264 TestNoKubernetes/serial/ProfileList 1.35
265 TestNoKubernetes/serial/Stop 1.27
266 TestNoKubernetes/serial/StartNoArgs 7.31
267 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.29
268 TestStoppedBinaryUpgrade/Setup 1.25
269 TestStoppedBinaryUpgrade/Upgrade 63.8
270 TestStoppedBinaryUpgrade/MinikubeLogs 1.31
279 TestPause/serial/Start 85.22
280 TestPause/serial/SecondStartNoReconfiguration 25.09
281 TestPause/serial/Pause 0.82
282 TestPause/serial/VerifyStatus 0.32
283 TestPause/serial/Unpause 0.7
284 TestPause/serial/PauseAgain 1.27
285 TestPause/serial/DeletePaused 2.79
286 TestPause/serial/VerifyDeletedResources 0.49
294 TestNetworkPlugins/group/false 4.94
299 TestStartStop/group/old-k8s-version/serial/FirstStart 59.16
300 TestStartStop/group/old-k8s-version/serial/DeployApp 9.58
301 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.17
302 TestStartStop/group/old-k8s-version/serial/Stop 12.21
303 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
304 TestStartStop/group/old-k8s-version/serial/SecondStart 50.67
305 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
306 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
307 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.29
308 TestStartStop/group/old-k8s-version/serial/Pause 3.17
310 TestStartStop/group/no-preload/serial/FirstStart 75.33
312 TestStartStop/group/embed-certs/serial/FirstStart 84.25
313 TestStartStop/group/no-preload/serial/DeployApp 10.49
314 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.16
315 TestStartStop/group/no-preload/serial/Stop 11.96
316 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
317 TestStartStop/group/no-preload/serial/SecondStart 57.94
318 TestStartStop/group/embed-certs/serial/DeployApp 11.51
319 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.62
320 TestStartStop/group/embed-certs/serial/Stop 12.66
321 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
322 TestStartStop/group/embed-certs/serial/SecondStart 55.8
323 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
324 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.09
325 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
326 TestStartStop/group/no-preload/serial/Pause 3.01
328 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 78.94
329 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
330 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.12
331 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
332 TestStartStop/group/embed-certs/serial/Pause 3.99
334 TestStartStop/group/newest-cni/serial/FirstStart 36.96
335 TestStartStop/group/newest-cni/serial/DeployApp 0
336 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.37
337 TestStartStop/group/newest-cni/serial/Stop 1.22
338 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
339 TestStartStop/group/newest-cni/serial/SecondStart 20.25
340 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.76
341 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.62
342 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.2
343 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
344 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
345 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.27
346 TestStartStop/group/newest-cni/serial/Pause 2.99
347 TestNetworkPlugins/group/auto/Start 85.72
348 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.25
349 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 65.34
350 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
351 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.09
352 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
353 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.13
354 TestNetworkPlugins/group/kindnet/Start 85.57
355 TestNetworkPlugins/group/auto/KubeletFlags 0.4
356 TestNetworkPlugins/group/auto/NetCatPod 11.33
357 TestNetworkPlugins/group/auto/DNS 0.22
358 TestNetworkPlugins/group/auto/Localhost 0.21
359 TestNetworkPlugins/group/auto/HairPin 0.18
360 TestNetworkPlugins/group/calico/Start 62.12
361 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
362 TestNetworkPlugins/group/kindnet/KubeletFlags 0.36
363 TestNetworkPlugins/group/kindnet/NetCatPod 11.26
364 TestNetworkPlugins/group/calico/ControllerPod 6.01
365 TestNetworkPlugins/group/kindnet/DNS 0.21
366 TestNetworkPlugins/group/kindnet/Localhost 0.17
367 TestNetworkPlugins/group/kindnet/HairPin 0.15
368 TestNetworkPlugins/group/calico/KubeletFlags 0.31
369 TestNetworkPlugins/group/calico/NetCatPod 12.28
370 TestNetworkPlugins/group/calico/DNS 0.22
371 TestNetworkPlugins/group/calico/Localhost 0.27
372 TestNetworkPlugins/group/calico/HairPin 0.19
373 TestNetworkPlugins/group/custom-flannel/Start 67.78
374 TestNetworkPlugins/group/enable-default-cni/Start 46.21
375 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.31
376 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.28
377 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.37
378 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.39
379 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
380 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
381 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
382 TestNetworkPlugins/group/custom-flannel/DNS 0.17
383 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
384 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
385 TestNetworkPlugins/group/flannel/Start 67.63
386 TestNetworkPlugins/group/bridge/Start 88.02
387 TestNetworkPlugins/group/flannel/ControllerPod 6.01
388 TestNetworkPlugins/group/flannel/KubeletFlags 0.33
389 TestNetworkPlugins/group/flannel/NetCatPod 10.26
390 TestNetworkPlugins/group/flannel/DNS 0.19
391 TestNetworkPlugins/group/flannel/Localhost 0.15
392 TestNetworkPlugins/group/flannel/HairPin 0.16
393 TestNetworkPlugins/group/bridge/KubeletFlags 0.43
394 TestNetworkPlugins/group/bridge/NetCatPod 11.35
395 TestNetworkPlugins/group/bridge/DNS 0.23
396 TestNetworkPlugins/group/bridge/Localhost 0.19
397 TestNetworkPlugins/group/bridge/HairPin 0.23
TestDownloadOnly/v1.28.0/json-events (13.41s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-340192 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-340192 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (13.407447687s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (13.41s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I0917 00:25:34.795237  859053 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
I0917 00:25:34.795320  859053 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21550-857204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-340192
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-340192: exit status 85 (84.493569ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-340192 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-340192 │ jenkins │ v1.37.0 │ 17 Sep 25 00:25 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/17 00:25:21
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 00:25:21.432407  859058 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:25:21.432635  859058 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:25:21.432665  859058 out.go:374] Setting ErrFile to fd 2...
	I0917 00:25:21.432684  859058 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:25:21.432994  859058 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-857204/.minikube/bin
	W0917 00:25:21.433180  859058 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21550-857204/.minikube/config/config.json: open /home/jenkins/minikube-integration/21550-857204/.minikube/config/config.json: no such file or directory
	I0917 00:25:21.433639  859058 out.go:368] Setting JSON to true
	I0917 00:25:21.434523  859058 start.go:130] hostinfo: {"hostname":"ip-172-31-30-239","uptime":11260,"bootTime":1758057462,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0917 00:25:21.434615  859058 start.go:140] virtualization:  
	I0917 00:25:21.438777  859058 out.go:99] [download-only-340192] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W0917 00:25:21.438924  859058 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/21550-857204/.minikube/cache/preloaded-tarball: no such file or directory
	I0917 00:25:21.438982  859058 notify.go:220] Checking for updates...
	I0917 00:25:21.441873  859058 out.go:171] MINIKUBE_LOCATION=21550
	I0917 00:25:21.444831  859058 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 00:25:21.447795  859058 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21550-857204/kubeconfig
	I0917 00:25:21.450774  859058 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-857204/.minikube
	I0917 00:25:21.454199  859058 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W0917 00:25:21.459777  859058 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0917 00:25:21.460132  859058 driver.go:421] Setting default libvirt URI to qemu:///system
	I0917 00:25:21.484816  859058 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0917 00:25:21.484944  859058 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:25:21.546571  859058 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-09-17 00:25:21.53742752 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0917 00:25:21.546699  859058 docker.go:318] overlay module found
	I0917 00:25:21.549662  859058 out.go:99] Using the docker driver based on user configuration
	I0917 00:25:21.549711  859058 start.go:304] selected driver: docker
	I0917 00:25:21.549722  859058 start.go:918] validating driver "docker" against <nil>
	I0917 00:25:21.549823  859058 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:25:21.602748  859058 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-09-17 00:25:21.593795633 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0917 00:25:21.602906  859058 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0917 00:25:21.603179  859058 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I0917 00:25:21.603337  859058 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0917 00:25:21.606485  859058 out.go:171] Using Docker driver with root privileges
	I0917 00:25:21.609464  859058 cni.go:84] Creating CNI manager for ""
	I0917 00:25:21.609541  859058 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0917 00:25:21.609558  859058 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0917 00:25:21.609648  859058 start.go:348] cluster config:
	{Name:download-only-340192 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-340192 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:25:21.612695  859058 out.go:99] Starting "download-only-340192" primary control-plane node in "download-only-340192" cluster
	I0917 00:25:21.612729  859058 cache.go:123] Beginning downloading kic base image for docker with crio
	I0917 00:25:21.615599  859058 out.go:99] Pulling base image v0.0.48 ...
	I0917 00:25:21.615628  859058 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0917 00:25:21.615745  859058 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:25:21.631638  859058 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I0917 00:25:21.631856  859058 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory
	I0917 00:25:21.631960  859058 image.go:150] Writing gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I0917 00:25:21.679990  859058 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I0917 00:25:21.680022  859058 cache.go:58] Caching tarball of preloaded images
	I0917 00:25:21.680195  859058 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0917 00:25:21.683576  859058 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I0917 00:25:21.683600  859058 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 ...
	I0917 00:25:21.776754  859058 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/21550-857204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I0917 00:25:26.562908  859058 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 as a tarball
	
	
	* The control-plane node download-only-340192 host does not exist
	  To start a cluster, run: "minikube start -p download-only-340192"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-340192
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.34.0/json-events (16.45s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-448624 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-448624 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (16.448732558s)
--- PASS: TestDownloadOnly/v1.34.0/json-events (16.45s)

                                                
                                    
TestDownloadOnly/v1.34.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/preload-exists
I0917 00:25:51.706529  859053 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
I0917 00:25:51.706570  859053 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21550-857204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-448624
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-448624: exit status 85 (83.687129ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-340192 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-340192 │ jenkins │ v1.37.0 │ 17 Sep 25 00:25 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 17 Sep 25 00:25 UTC │ 17 Sep 25 00:25 UTC │
	│ delete  │ -p download-only-340192                                                                                                                                                   │ download-only-340192 │ jenkins │ v1.37.0 │ 17 Sep 25 00:25 UTC │ 17 Sep 25 00:25 UTC │
	│ start   │ -o=json --download-only -p download-only-448624 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-448624 │ jenkins │ v1.37.0 │ 17 Sep 25 00:25 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/17 00:25:35
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 00:25:35.302488  859268 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:25:35.302613  859268 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:25:35.302622  859268 out.go:374] Setting ErrFile to fd 2...
	I0917 00:25:35.302628  859268 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:25:35.302881  859268 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-857204/.minikube/bin
	I0917 00:25:35.303288  859268 out.go:368] Setting JSON to true
	I0917 00:25:35.304112  859268 start.go:130] hostinfo: {"hostname":"ip-172-31-30-239","uptime":11274,"bootTime":1758057462,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0917 00:25:35.304178  859268 start.go:140] virtualization:  
	I0917 00:25:35.307470  859268 out.go:99] [download-only-448624] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0917 00:25:35.307686  859268 notify.go:220] Checking for updates...
	I0917 00:25:35.310564  859268 out.go:171] MINIKUBE_LOCATION=21550
	I0917 00:25:35.313528  859268 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 00:25:35.316375  859268 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21550-857204/kubeconfig
	I0917 00:25:35.319155  859268 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-857204/.minikube
	I0917 00:25:35.322053  859268 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W0917 00:25:35.327822  859268 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0917 00:25:35.328095  859268 driver.go:421] Setting default libvirt URI to qemu:///system
	I0917 00:25:35.350119  859268 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0917 00:25:35.350255  859268 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:25:35.413417  859268 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-09-17 00:25:35.404004963 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0917 00:25:35.413544  859268 docker.go:318] overlay module found
	I0917 00:25:35.416551  859268 out.go:99] Using the docker driver based on user configuration
	I0917 00:25:35.416630  859268 start.go:304] selected driver: docker
	I0917 00:25:35.416643  859268 start.go:918] validating driver "docker" against <nil>
	I0917 00:25:35.416739  859268 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:25:35.473005  859268 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-09-17 00:25:35.464219101 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0917 00:25:35.473160  859268 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0917 00:25:35.473435  859268 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I0917 00:25:35.473600  859268 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0917 00:25:35.476815  859268 out.go:171] Using Docker driver with root privileges
	I0917 00:25:35.479604  859268 cni.go:84] Creating CNI manager for ""
	I0917 00:25:35.479677  859268 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0917 00:25:35.479690  859268 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0917 00:25:35.479772  859268 start.go:348] cluster config:
	{Name:download-only-448624 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:download-only-448624 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:25:35.482712  859268 out.go:99] Starting "download-only-448624" primary control-plane node in "download-only-448624" cluster
	I0917 00:25:35.482736  859268 cache.go:123] Beginning downloading kic base image for docker with crio
	I0917 00:25:35.485627  859268 out.go:99] Pulling base image v0.0.48 ...
	I0917 00:25:35.485658  859268 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 00:25:35.485837  859268 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0917 00:25:35.501654  859268 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I0917 00:25:35.501784  859268 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory
	I0917 00:25:35.501809  859268 image.go:68] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory, skipping pull
	I0917 00:25:35.501815  859268 image.go:137] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in cache, skipping pull
	I0917 00:25:35.501823  859268 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 as a tarball
	I0917 00:25:35.540755  859268 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.0/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4
	I0917 00:25:35.540782  859268 cache.go:58] Caching tarball of preloaded images
	I0917 00:25:35.540953  859268 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0917 00:25:35.544042  859268 out.go:99] Downloading Kubernetes v1.34.0 preload ...
	I0917 00:25:35.544062  859268 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4 ...
	I0917 00:25:35.643563  859268 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.0/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:36555bb244eebf6e383c5e8810b48b3a -> /home/jenkins/minikube-integration/21550-857204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4
	I0917 00:25:49.694836  859268 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4 ...
	I0917 00:25:49.694940  859268 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/21550-857204/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4 ...
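The preload download above appends `?checksum=md5:…` to the tarball URL, and the `verifying checksum` step then compares the saved file's MD5 against that value. The comparison amounts to the following sketch (function name hypothetical, not minikube's actual code):

```python
import hashlib

def verify_checksum(data: bytes, expected_md5: str) -> bool:
    """Compare the MD5 hex digest of downloaded bytes against the expected value."""
    return hashlib.md5(data).hexdigest() == expected_md5

# Example with a small known payload rather than the real preload tarball:
print(verify_checksum(b"hello", "5d41402abc4b2a76b9719d911017c592"))  # True
```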
	
	
	* The control-plane node download-only-448624 host does not exist
	  To start a cluster, run: "minikube start -p download-only-448624"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.0/LogsDuration (0.08s)

TestDownloadOnly/v1.34.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.34.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.0/DeleteAll (0.21s)

TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-448624
--- PASS: TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.15s)

TestBinaryMirror (0.65s)

=== RUN   TestBinaryMirror
I0917 00:25:53.024137  859053 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-633798 --alsologtostderr --binary-mirror http://127.0.0.1:44367 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-633798" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-633798
--- PASS: TestBinaryMirror (0.65s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-160127
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-160127: exit status 85 (67.10081ms)

-- stdout --
	* Profile "addons-160127" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-160127"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-160127
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-160127: exit status 85 (70.591876ms)

-- stdout --
	* Profile "addons-160127" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-160127"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (181.6s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-160127 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-160127 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m1.599685693s)
--- PASS: TestAddons/Setup (181.60s)

TestAddons/serial/GCPAuth/Namespaces (0.18s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-160127 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-160127 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

TestAddons/serial/GCPAuth/FakeCredentials (11.86s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-160127 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-160127 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [4c07e046-f706-4170-9cc1-e8c605dd21de] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [4c07e046-f706-4170-9cc1-e8c605dd21de] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 11.003990757s
addons_test.go:694: (dbg) Run:  kubectl --context addons-160127 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-160127 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-160127 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-160127 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (11.86s)

TestAddons/parallel/Registry (17s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 6.125014ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-s86g4" [716ddc93-7693-4193-8385-510c6f2a4e55] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003469138s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-gnjgn" [ab0242cc-c5be-4bbe-afbf-29f473bc10b3] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004004465s
addons_test.go:392: (dbg) Run:  kubectl --context addons-160127 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-160127 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-160127 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.985369076s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-160127 ip
2025/09/17 00:29:32 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-160127 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.00s)

TestAddons/parallel/RegistryCreds (0.76s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 4.482563ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-160127
addons_test.go:332: (dbg) Run:  kubectl --context addons-160127 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-160127 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.76s)

TestAddons/parallel/InspektorGadget (6.3s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-qgjkm" [f19bc1e9-1011-48a3-8468-af2167e93cef] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004007828s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-160127 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (6.30s)

TestAddons/parallel/MetricsServer (5.84s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 4.230457ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-m46l4" [6dc054c8-2180-40ae-a4e4-91c4c67ecb0a] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.00337467s
addons_test.go:463: (dbg) Run:  kubectl --context addons-160127 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-160127 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.84s)

TestAddons/parallel/CSI (61.77s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0917 00:29:58.593460  859053 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0917 00:29:58.600692  859053 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0917 00:29:58.600724  859053 kapi.go:107] duration metric: took 7.277053ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 7.288918ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-160127 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-160127 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-160127 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-160127 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-160127 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-160127 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-160127 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-160127 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-160127 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-160127 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-160127 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-160127 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-160127 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-160127 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-160127 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-160127 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-160127 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-160127 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-160127 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-160127 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-160127 get pvc hpvc -o jsonpath={.status.phase} -n default
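The repeated `kubectl get pvc hpvc -o jsonpath={.status.phase}` invocations above are a poll-until-Bound loop with a deadline. The pattern can be sketched as follows (a generic helper, not the actual `helpers_test.go` implementation; the fake getter stands in for the kubectl call):

```python
import time

def wait_for_phase(get_phase, want="Bound", timeout=360.0, interval=2.0):
    """Poll a status getter until it returns the wanted phase or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_phase() == want:
            return True
        time.sleep(interval)
    return False

# Fake getter standing in for `kubectl get pvc hpvc -o jsonpath={.status.phase}`:
phases = iter(["Pending", "Pending", "Bound"])
print(wait_for_phase(lambda: next(phases), interval=0.01))  # True
```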
addons_test.go:562: (dbg) Run:  kubectl --context addons-160127 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [b6535cca-5168-4a45-80e6-0f30049b7654] Pending
helpers_test.go:352: "task-pv-pod" [b6535cca-5168-4a45-80e6-0f30049b7654] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [b6535cca-5168-4a45-80e6-0f30049b7654] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.006098124s
addons_test.go:572: (dbg) Run:  kubectl --context addons-160127 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-160127 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-160127 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-160127 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-160127 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-160127 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-160127 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-160127 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-160127 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-160127 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-160127 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-160127 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-160127 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-160127 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-160127 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-160127 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-160127 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-160127 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [518f1a64-8217-4366-84cc-b8ef127068c0] Pending
helpers_test.go:352: "task-pv-pod-restore" [518f1a64-8217-4366-84cc-b8ef127068c0] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [518f1a64-8217-4366-84cc-b8ef127068c0] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.009213118s
addons_test.go:614: (dbg) Run:  kubectl --context addons-160127 delete pod task-pv-pod-restore
addons_test.go:614: (dbg) Done: kubectl --context addons-160127 delete pod task-pv-pod-restore: (1.216639361s)
addons_test.go:618: (dbg) Run:  kubectl --context addons-160127 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-160127 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-160127 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-160127 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-160127 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.090020014s)
--- PASS: TestAddons/parallel/CSI (61.77s)

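The snapshot/restore sequence exercised above (snapshot.yaml, then pvc-restore.yaml) follows the standard CSI pattern: a VolumeSnapshot taken from the live PVC, then a new PVC whose dataSource points back at the snapshot. A minimal sketch of such manifests (names and sizes illustrative; these are not the actual testdata files):

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: new-snapshot-demo
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass   # illustrative class name
  source:
    persistentVolumeClaimName: hpvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc-restore
spec:
  dataSource:
    name: new-snapshot-demo
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 1Gi
```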
TestAddons/parallel/Headlamp (18.94s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-160127 --alsologtostderr -v=1
addons_test.go:808: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-160127 --alsologtostderr -v=1: (1.147025891s)
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-85f8f8dc54-zj2rs" [01f34fc5-c5f5-49e2-bc6f-07d1aa456436] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-85f8f8dc54-zj2rs" [01f34fc5-c5f5-49e2-bc6f-07d1aa456436] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.005346039s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-160127 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-160127 addons disable headlamp --alsologtostderr -v=1: (5.78988853s)
--- PASS: TestAddons/parallel/Headlamp (18.94s)

TestAddons/parallel/CloudSpanner (5.64s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-85f6b7fc65-ktv96" [77560b72-ec07-43f5-b359-61a66038eb58] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003615849s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-160127 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.64s)

TestAddons/parallel/LocalPath (54.74s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-160127 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-160127 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-160127 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-160127 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-160127 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-160127 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-160127 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-160127 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [2feb6335-27f9-4785-9ef1-47f9a76e1627] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [2feb6335-27f9-4785-9ef1-47f9a76e1627] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [2feb6335-27f9-4785-9ef1-47f9a76e1627] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.003917555s
addons_test.go:967: (dbg) Run:  kubectl --context addons-160127 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-160127 ssh "cat /opt/local-path-provisioner/pvc-e4aa6a01-96f9-4229-b8a9-878dadd04a59_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-160127 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-160127 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-160127 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-160127 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.185244489s)
--- PASS: TestAddons/parallel/LocalPath (54.74s)

TestAddons/parallel/NvidiaDevicePlugin (5.91s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-57955" [7915bd39-2026-4aa6-b307-af66b572bdb7] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.005594962s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-160127 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.91s)

TestAddons/parallel/Yakd (11.81s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-9j9cn" [9b593ac3-29ba-45d9-94e3-21ff7652b1d1] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.002792582s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-160127 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-160127 addons disable yakd --alsologtostderr -v=1: (5.810923457s)
--- PASS: TestAddons/parallel/Yakd (11.81s)

TestAddons/StoppedEnableDisable (12.15s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-160127
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-160127: (11.881654023s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-160127
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-160127
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-160127
--- PASS: TestAddons/StoppedEnableDisable (12.15s)

TestCertOptions (33.71s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-426255 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-426255 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (31.006816572s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-426255 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-426255 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-426255 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-426255" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-426255
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-426255: (2.0136492s)
--- PASS: TestCertOptions (33.71s)

TestCertExpiration (253.83s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-275426 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
E0917 01:23:39.250425  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/addons-160127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 01:23:56.177159  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/addons-160127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-275426 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (42.823611736s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-275426 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-275426 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (28.151049483s)
helpers_test.go:175: Cleaning up "cert-expiration-275426" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-275426
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-275426: (2.856241702s)
--- PASS: TestCertExpiration (253.83s)

TestForceSystemdFlag (44.65s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-650478 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-650478 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (41.943289557s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-650478 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-650478" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-650478
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-650478: (2.401673146s)
--- PASS: TestForceSystemdFlag (44.65s)

TestForceSystemdEnv (44.12s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-016570 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-016570 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (41.537485453s)
helpers_test.go:175: Cleaning up "force-systemd-env-016570" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-016570
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-016570: (2.58268733s)
--- PASS: TestForceSystemdEnv (44.12s)

TestErrorSpam/setup (32.42s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-872835 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-872835 --driver=docker  --container-runtime=crio
E0917 00:33:56.180747  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/addons-160127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:33:56.187102  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/addons-160127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:33:56.198487  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/addons-160127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:33:56.219852  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/addons-160127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:33:56.261248  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/addons-160127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:33:56.342685  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/addons-160127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:33:56.504187  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/addons-160127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:33:56.825929  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/addons-160127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:33:57.468021  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/addons-160127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:33:58.750096  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/addons-160127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:34:01.312702  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/addons-160127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-872835 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-872835 --driver=docker  --container-runtime=crio: (32.424093671s)
--- PASS: TestErrorSpam/setup (32.42s)

TestErrorSpam/start (0.78s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-872835 --log_dir /tmp/nospam-872835 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-872835 --log_dir /tmp/nospam-872835 start --dry-run
E0917 00:34:06.434276  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/addons-160127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-872835 --log_dir /tmp/nospam-872835 start --dry-run
--- PASS: TestErrorSpam/start (0.78s)

TestErrorSpam/status (1.05s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-872835 --log_dir /tmp/nospam-872835 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-872835 --log_dir /tmp/nospam-872835 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-872835 --log_dir /tmp/nospam-872835 status
--- PASS: TestErrorSpam/status (1.05s)

TestErrorSpam/pause (1.82s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-872835 --log_dir /tmp/nospam-872835 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-872835 --log_dir /tmp/nospam-872835 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-872835 --log_dir /tmp/nospam-872835 pause
--- PASS: TestErrorSpam/pause (1.82s)

TestErrorSpam/unpause (1.86s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-872835 --log_dir /tmp/nospam-872835 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-872835 --log_dir /tmp/nospam-872835 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-872835 --log_dir /tmp/nospam-872835 unpause
--- PASS: TestErrorSpam/unpause (1.86s)

TestErrorSpam/stop (1.42s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-872835 --log_dir /tmp/nospam-872835 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-872835 --log_dir /tmp/nospam-872835 stop: (1.223529157s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-872835 --log_dir /tmp/nospam-872835 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-872835 --log_dir /tmp/nospam-872835 stop
--- PASS: TestErrorSpam/stop (1.42s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21550-857204/.minikube/files/etc/test/nested/copy/859053/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (75.01s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-619464 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E0917 00:34:37.158118  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/addons-160127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:35:18.120370  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/addons-160127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-619464 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m15.011730664s)
--- PASS: TestFunctional/serial/StartWithProxy (75.01s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (29.7s)

=== RUN   TestFunctional/serial/SoftStart
I0917 00:35:32.627346  859053 config.go:182] Loaded profile config "functional-619464": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-619464 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-619464 --alsologtostderr -v=8: (29.698849623s)
functional_test.go:678: soft start took 29.699358093s for "functional-619464" cluster.
I0917 00:36:02.326489  859053 config.go:182] Loaded profile config "functional-619464": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/SoftStart (29.70s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.11s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-619464 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.8s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-619464 cache add registry.k8s.io/pause:3.1: (1.277612677s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-619464 cache add registry.k8s.io/pause:3.3: (1.270763789s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-619464 cache add registry.k8s.io/pause:latest: (1.253677755s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.80s)

TestFunctional/serial/CacheCmd/cache/add_local (1.54s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-619464 /tmp/TestFunctionalserialCacheCmdcacheadd_local4211466448/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 cache add minikube-local-cache-test:functional-619464
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 cache delete minikube-local-cache-test:functional-619464
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-619464
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.54s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.33s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.33s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-619464 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (300.269807ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-linux-arm64 -p functional-619464 cache reload: (1.152572866s)
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.10s)

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 kubectl -- --context functional-619464 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-619464 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

TestFunctional/serial/ExtraConfig (38.81s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-619464 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0917 00:36:40.043047  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/addons-160127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-619464 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.811814288s)
functional_test.go:776: restart took 38.811916197s for "functional-619464" cluster.
I0917 00:36:49.580937  859053 config.go:182] Loaded profile config "functional-619464": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/ExtraConfig (38.81s)

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-619464 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.78s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-619464 logs: (1.784108851s)
--- PASS: TestFunctional/serial/LogsCmd (1.78s)

TestFunctional/serial/LogsFileCmd (1.79s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 logs --file /tmp/TestFunctionalserialLogsFileCmd3117102892/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-619464 logs --file /tmp/TestFunctionalserialLogsFileCmd3117102892/001/logs.txt: (1.785727246s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.79s)

TestFunctional/serial/InvalidService (4.34s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-619464 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-619464
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-619464: exit status 115 (576.799642ms)
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31468 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-619464 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.34s)
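The exit path exercised above can be sketched as a standalone check: `minikube service` refuses to open a URL when no backing pod is Running. The function below is a hypothetical stand-in (pod phases are supplied by hand, nothing is read from a real cluster); only the 115 exit status is taken from the log:

```shell
# Hypothetical stand-in for the SVC_UNREACHABLE check: succeed only when at
# least one backing pod reports phase "Running"; phases here are made up.
svc_reachable() {
  for phase in "$@"; do
    [ "$phase" = "Running" ] && return 0
  done
  echo "X Exiting due to SVC_UNREACHABLE: service not available: no running pod found" >&2
  return 115   # matches the exit status seen in the log
}

svc_reachable Pending Failed || status=$?
echo "exit status: ${status:-0}"   # prints: exit status: 115
```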

TestFunctional/parallel/ConfigCmd (0.46s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-619464 config get cpus: exit status 14 (63.174625ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-619464 config get cpus: exit status 14 (73.732982ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.46s)
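The set → get → unset → get sequence above exercises minikube's config exit-code contract: `config get` on a missing key prints an error and exits 14. The sketch below models that contract with a throwaway key-value file; the function names and storage are invented for illustration and are not minikube's actual config implementation:

```shell
# Toy key-value store mimicking the observed CLI contract (illustrative only).
cfg=$(mktemp)

config_set()   { printf '%s=%s\n' "$1" "$2" >>"$cfg"; }
config_unset() { { grep -v "^$1=" "$cfg" || :; } >"$cfg.tmp"; mv "$cfg.tmp" "$cfg"; }
config_get() {
  # Print the stored value, or fail with status 14 like `minikube config get`.
  grep "^$1=" "$cfg" | cut -d= -f2 | grep . \
    || { echo "Error: specified key could not be found in config" >&2; return 14; }
}

config_set cpus 2
config_get cpus                          # prints 2
config_unset cpus
config_get cpus || echo "exit status $?" # error on stderr, then: exit status 14
```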

TestFunctional/parallel/DashboardCmd (9.43s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-619464 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-619464 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 889040: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.43s)

TestFunctional/parallel/DryRun (0.56s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-619464 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-619464 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (250.632613ms)
-- stdout --
	* [functional-619464] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21550
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21550-857204/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-857204/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0917 00:47:28.563729  888569 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:47:28.564073  888569 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:47:28.564110  888569 out.go:374] Setting ErrFile to fd 2...
	I0917 00:47:28.564132  888569 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:47:28.565067  888569 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-857204/.minikube/bin
	I0917 00:47:28.565548  888569 out.go:368] Setting JSON to false
	I0917 00:47:28.566513  888569 start.go:130] hostinfo: {"hostname":"ip-172-31-30-239","uptime":12587,"bootTime":1758057462,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0917 00:47:28.566609  888569 start.go:140] virtualization:  
	I0917 00:47:28.569832  888569 out.go:179] * [functional-619464] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0917 00:47:28.572936  888569 out.go:179]   - MINIKUBE_LOCATION=21550
	I0917 00:47:28.573008  888569 notify.go:220] Checking for updates...
	I0917 00:47:28.576974  888569 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 00:47:28.584718  888569 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-857204/kubeconfig
	I0917 00:47:28.587796  888569 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-857204/.minikube
	I0917 00:47:28.590792  888569 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0917 00:47:28.594472  888569 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 00:47:28.598620  888569 config.go:182] Loaded profile config "functional-619464": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:47:28.599264  888569 driver.go:421] Setting default libvirt URI to qemu:///system
	I0917 00:47:28.630596  888569 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0917 00:47:28.630714  888569 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:47:28.722155  888569 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-17 00:47:28.707582973 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0917 00:47:28.722258  888569 docker.go:318] overlay module found
	I0917 00:47:28.725344  888569 out.go:179] * Using the docker driver based on existing profile
	I0917 00:47:28.728168  888569 start.go:304] selected driver: docker
	I0917 00:47:28.728183  888569 start.go:918] validating driver "docker" against &{Name:functional-619464 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-619464 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mou
ntUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:47:28.728279  888569 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 00:47:28.731789  888569 out.go:203] 
	W0917 00:47:28.735863  888569 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0917 00:47:28.738681  888569 out.go:203] 
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-619464 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.56s)
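The non-zero exit above is minikube's preflight memory validation rejecting `--memory 250MB`. A simplified stand-in of that check (the 1800MB floor and exit status 23 are taken from the log; the function itself is hypothetical):

```shell
min_mb=1800   # usable minimum quoted in the error message

validate_memory() {
  # Reject requests below the floor, mirroring RSRC_INSUFFICIENT_REQ_MEMORY.
  if [ "$1" -lt "$min_mb" ]; then
    echo "X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation ${1}MiB is less than the usable minimum of ${min_mb}MB" >&2
    return 23   # matches the observed exit status
  fi
}

validate_memory 250 || status=$?
echo "exit status: ${status:-0}"   # prints: exit status: 23
```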

TestFunctional/parallel/InternationalLanguage (0.26s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-619464 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-619464 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (259.198151ms)
-- stdout --
	* [functional-619464] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21550
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21550-857204/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-857204/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0917 00:47:28.306126  888490 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:47:28.306301  888490 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:47:28.306307  888490 out.go:374] Setting ErrFile to fd 2...
	I0917 00:47:28.306312  888490 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:47:28.306711  888490 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-857204/.minikube/bin
	I0917 00:47:28.307079  888490 out.go:368] Setting JSON to false
	I0917 00:47:28.308108  888490 start.go:130] hostinfo: {"hostname":"ip-172-31-30-239","uptime":12587,"bootTime":1758057462,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0917 00:47:28.308169  888490 start.go:140] virtualization:  
	I0917 00:47:28.312072  888490 out.go:179] * [functional-619464] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I0917 00:47:28.315100  888490 out.go:179]   - MINIKUBE_LOCATION=21550
	I0917 00:47:28.315199  888490 notify.go:220] Checking for updates...
	I0917 00:47:28.321954  888490 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 00:47:28.325005  888490 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-857204/kubeconfig
	I0917 00:47:28.328265  888490 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-857204/.minikube
	I0917 00:47:28.331089  888490 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0917 00:47:28.334013  888490 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 00:47:28.337399  888490 config.go:182] Loaded profile config "functional-619464": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:47:28.337984  888490 driver.go:421] Setting default libvirt URI to qemu:///system
	I0917 00:47:28.372235  888490 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0917 00:47:28.372352  888490 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:47:28.469951  888490 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-17 00:47:28.458198577 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0917 00:47:28.470071  888490 docker.go:318] overlay module found
	I0917 00:47:28.473147  888490 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I0917 00:47:28.476067  888490 start.go:304] selected driver: docker
	I0917 00:47:28.476085  888490 start.go:918] validating driver "docker" against &{Name:functional-619464 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-619464 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mou
ntUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 00:47:28.476182  888490 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 00:47:28.481141  888490 out.go:203] 
	W0917 00:47:28.484080  888490 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0917 00:47:28.487251  888490 out.go:203] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.26s)
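The French messages above come from minikube selecting translations based on the caller's locale. A toy lookup illustrating the idea, using two strings copied from the log (minikube's real i18n is driven by translation files, which this does not reproduce):

```shell
# Illustrative locale-keyed message lookup; only fr/default are modeled.
driver_message() {
  case "$1" in
    fr*) echo "* Utilisation du pilote docker basé sur le profil existant" ;;
    *)   echo "* Using the docker driver based on existing profile" ;;
  esac
}

driver_message "fr_FR.UTF-8"
driver_message "en_US.UTF-8"
```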

TestFunctional/parallel/StatusCmd (1.34s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.34s)

TestFunctional/parallel/AddonsCmd (0.19s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.19s)

TestFunctional/parallel/PersistentVolumeClaim (24.56s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [eee93eb9-c50a-43ba-8995-446ba61a0e46] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004027505s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-619464 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-619464 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-619464 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-619464 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [41b7ad45-aa6c-4283-82ef-b9db97ec60a3] Pending
helpers_test.go:352: "sp-pod" [41b7ad45-aa6c-4283-82ef-b9db97ec60a3] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [41b7ad45-aa6c-4283-82ef-b9db97ec60a3] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.003343911s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-619464 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-619464 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-619464 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [1796e0f8-c684-41e4-a2ba-a2ef6f4b6221] Pending
helpers_test.go:352: "sp-pod" [1796e0f8-c684-41e4-a2ba-a2ef6f4b6221] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003430209s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-619464 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.56s)

TestFunctional/parallel/SSHCmd (0.79s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.79s)

TestFunctional/parallel/CpCmd (1.96s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 ssh -n functional-619464 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 cp functional-619464:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3543103255/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 ssh -n functional-619464 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 ssh -n functional-619464 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.96s)

TestFunctional/parallel/FileSync (0.35s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/859053/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 ssh "sudo cat /etc/test/nested/copy/859053/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.35s)

TestFunctional/parallel/CertSync (2.15s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/859053.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 ssh "sudo cat /etc/ssl/certs/859053.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/859053.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 ssh "sudo cat /usr/share/ca-certificates/859053.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/8590532.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 ssh "sudo cat /etc/ssl/certs/8590532.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/8590532.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 ssh "sudo cat /usr/share/ca-certificates/8590532.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
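The hashed filenames checked above (`51391683.0`, `3ec20f2e.0`) follow OpenSSL's `c_rehash` naming convention, `<subject_hash>.<n>`, for certificates installed into `/etc/ssl/certs`. A minimal sketch of deriving that name for an arbitrary certificate (the throwaway cert and `/tmp` paths are illustrative, not minikube's):

```shell
# Create a throwaway self-signed cert, then compute the trust-store
# filename an OpenSSL-style hashed directory would use for it (<subject_hash>.0).
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" \
  -keyout /tmp/demo.key -out /tmp/demo.pem -days 1 2>/dev/null
hash=$(openssl x509 -noout -subject_hash -in /tmp/demo.pem)
echo "trust-store name: ${hash}.0"
```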
--- PASS: TestFunctional/parallel/CertSync (2.15s)

TestFunctional/parallel/NodeLabels (0.14s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-619464 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.14s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.57s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-619464 ssh "sudo systemctl is-active docker": exit status 1 (300.564534ms)

-- stdout --
	inactive

-- /stdout --
** stderr **
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-619464 ssh "sudo systemctl is-active containerd": exit status 1 (272.998559ms)

-- stdout --
	inactive

-- /stdout --
** stderr **
	ssh: Process exited with status 3

** /stderr **
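The pass condition here hinges on `systemctl is-active` semantics: it prints the unit state and exits non-zero (status 3 for an inactive unit), which the ssh wrapper surfaces as a failed command. A hedged sketch of that check (the helper name is made up; the real test drives it through `minikube ssh`):

```shell
# Assumed helper, not minikube's code: a runtime counts as disabled when
# `systemctl is-active <unit>` both fails and prints exactly "inactive".
is_runtime_disabled() {
  # $1: captured stdout of `systemctl is-active`, $2: its exit status
  [ "$1" = "inactive" ] && [ "$2" -ne 0 ]
}

is_runtime_disabled "inactive" 3 && echo "docker: disabled, as the test expects"
```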
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.57s)

TestFunctional/parallel/License (0.31s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.31s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.62s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-619464 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-619464 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-619464 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 884793: os: process already finished
helpers_test.go:519: unable to terminate pid 884632: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-619464 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.62s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-619464 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.35s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-619464 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [b687dff8-4dda-4b3c-9870-bc4a90ceaaf1] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [b687dff8-4dda-4b3c-9870-bc4a90ceaaf1] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.006189759s
I0917 00:37:08.027754  859053 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.35s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-619464 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.103.11.88 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-619464 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.10s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "355.164607ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "58.324489ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "360.063043ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "57.422228ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

TestFunctional/parallel/MountCmd/any-port (8.86s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-619464 /tmp/TestFunctionalparallelMountCmdany-port2235175155/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1758070033419084640" to /tmp/TestFunctionalparallelMountCmdany-port2235175155/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1758070033419084640" to /tmp/TestFunctionalparallelMountCmdany-port2235175155/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1758070033419084640" to /tmp/TestFunctionalparallelMountCmdany-port2235175155/001/test-1758070033419084640
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-619464 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (321.949855ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I0917 00:47:13.741329  859053 retry.go:31] will retry after 534.439924ms: exit status 1
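The `retry.go:31] will retry after …` line above reflects a simple poll-until-success loop: the mount check is rerun after a short delay until it passes. A minimal shell sketch of the same pattern (the `retry` helper and the `findmnt` usage example are assumptions, not minikube's implementation):

```shell
# retry <attempts> <delay> <cmd...>: rerun cmd until it succeeds or
# attempts are exhausted, mirroring the "will retry after ..." log lines.
retry() {
  attempts=$1; delay=$2; shift 2
  i=0
  while [ "$i" -lt "$attempts" ]; do
    "$@" && return 0
    i=$((i + 1))
    sleep "$delay"
  done
  return 1
}

# e.g. (hypothetical) wait for the 9p mount to show up inside the node:
# retry 5 0.5 sh -c 'findmnt -T /mount-9p | grep -q 9p'
retry 3 0 true && echo "succeeded"
```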
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 17 00:47 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 17 00:47 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 17 00:47 test-1758070033419084640
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 ssh cat /mount-9p/test-1758070033419084640
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-619464 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [0ae96cb7-da7f-4f59-a2f0-a31234c5b0b9] Pending
helpers_test.go:352: "busybox-mount" [0ae96cb7-da7f-4f59-a2f0-a31234c5b0b9] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [0ae96cb7-da7f-4f59-a2f0-a31234c5b0b9] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [0ae96cb7-da7f-4f59-a2f0-a31234c5b0b9] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.005471644s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-619464 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-619464 /tmp/TestFunctionalparallelMountCmdany-port2235175155/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.86s)

TestFunctional/parallel/MountCmd/specific-port (1.93s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-619464 /tmp/TestFunctionalparallelMountCmdspecific-port2531894097/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-619464 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (348.34513ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I0917 00:47:22.628358  859053 retry.go:31] will retry after 359.076738ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-619464 /tmp/TestFunctionalparallelMountCmdspecific-port2531894097/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-619464 ssh "sudo umount -f /mount-9p": exit status 1 (356.054138ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr **
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-619464 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-619464 /tmp/TestFunctionalparallelMountCmdspecific-port2531894097/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.93s)

TestFunctional/parallel/ServiceCmd/List (0.69s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.69s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.69s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-619464 /tmp/TestFunctionalparallelMountCmdVerifyCleanup891002839/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-619464 /tmp/TestFunctionalparallelMountCmdVerifyCleanup891002839/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-619464 /tmp/TestFunctionalparallelMountCmdVerifyCleanup891002839/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-619464 ssh "findmnt -T" /mount1: exit status 1 (813.73597ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I0917 00:47:25.026759  859053 retry.go:31] will retry after 666.10039ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-619464 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-619464 /tmp/TestFunctionalparallelMountCmdVerifyCleanup891002839/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-619464 /tmp/TestFunctionalparallelMountCmdVerifyCleanup891002839/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-619464 /tmp/TestFunctionalparallelMountCmdVerifyCleanup891002839/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.69s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.85s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 service list -o json
functional_test.go:1504: Took "850.676632ms" to run "out/minikube-linux-arm64 -p functional-619464 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.85s)

TestFunctional/parallel/Version/short (0.11s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 version --short
--- PASS: TestFunctional/parallel/Version/short (0.11s)

TestFunctional/parallel/Version/components (1.4s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-619464 version -o=json --components: (1.401728236s)
--- PASS: TestFunctional/parallel/Version/components (1.40s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-619464 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.0
registry.k8s.io/kube-proxy:v1.34.0
registry.k8s.io/kube-controller-manager:v1.34.0
registry.k8s.io/kube-apiserver:v1.34.0
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-619464
localhost/kicbase/echo-server:functional-619464
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-619464 image ls --format short --alsologtostderr:
I0917 00:47:43.364149  890948 out.go:360] Setting OutFile to fd 1 ...
I0917 00:47:43.364329  890948 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0917 00:47:43.364336  890948 out.go:374] Setting ErrFile to fd 2...
I0917 00:47:43.364342  890948 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0917 00:47:43.364641  890948 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-857204/.minikube/bin
I0917 00:47:43.365353  890948 config.go:182] Loaded profile config "functional-619464": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0917 00:47:43.365562  890948 config.go:182] Loaded profile config "functional-619464": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0917 00:47:43.366050  890948 cli_runner.go:164] Run: docker container inspect functional-619464 --format={{.State.Status}}
I0917 00:47:43.389132  890948 ssh_runner.go:195] Run: systemctl --version
I0917 00:47:43.389195  890948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-619464
I0917 00:47:43.420859  890948 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33568 SSHKeyPath:/home/jenkins/minikube-integration/21550-857204/.minikube/machines/functional-619464/id_rsa Username:docker}
I0917 00:47:43.521435  890948 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-619464 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ a1894772a478e │ 206MB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ docker.io/library/nginx                 │ latest             │ 17848b7d08d19 │ 202MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 1611cd07b61d5 │ 3.77MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 138784d87c9c5 │ 73.2MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.0            │ a25f5ef9c34c3 │ 51.6MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.0            │ 996be7e86d9b3 │ 72.6MB │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ registry.k8s.io/kube-apiserver          │ v1.34.0            │ d291939e99406 │ 84.8MB │
│ registry.k8s.io/kube-proxy              │ v1.34.0            │ 6fc32d66c1411 │ 75.9MB │
│ docker.io/library/nginx                 │ alpine             │ 35f3cbee4fb77 │ 54.3MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ localhost/minikube-local-cache-test     │ functional-619464  │ 53d303f6a8d75 │ 3.33kB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ localhost/kicbase/echo-server           │ functional-619464  │ ce2d2cda2d858 │ 4.79MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-619464 image ls --format table --alsologtostderr:
I0917 00:47:44.096722  891160 out.go:360] Setting OutFile to fd 1 ...
I0917 00:47:44.096838  891160 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0917 00:47:44.096849  891160 out.go:374] Setting ErrFile to fd 2...
I0917 00:47:44.096856  891160 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0917 00:47:44.097148  891160 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-857204/.minikube/bin
I0917 00:47:44.097755  891160 config.go:182] Loaded profile config "functional-619464": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0917 00:47:44.097876  891160 config.go:182] Loaded profile config "functional-619464": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0917 00:47:44.098333  891160 cli_runner.go:164] Run: docker container inspect functional-619464 --format={{.State.Status}}
I0917 00:47:44.118008  891160 ssh_runner.go:195] Run: systemctl --version
I0917 00:47:44.118073  891160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-619464
I0917 00:47:44.141535  891160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33568 SSHKeyPath:/home/jenkins/minikube-integration/21550-857204/.minikube/machines/functional-619464/id_rsa Username:docker}
I0917 00:47:44.245654  891160 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-619464 image ls --format json --alsologtostderr:
[{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["localhost/kicbase/echo-server:functional-619464"],"size":"4788229"},{"id":"996be7e86d9b3a549d718de63713d9fea9db1f45ac44863a6770292d7b463570","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:02610466f968b6af36e9e76aee7a1d52f922fba4ec5cdb7a5423137d726f0da5","registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.0"],"size":"72629077"},{"id":"6fc32d66c141152245438e6512df788cb52d64a1617e33561950b0e7a4675abf","repoDigests":["registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067","registry.k8s.io/kube-proxy@sha256:49d0c22a7e97772329396bf30c435c70c6ad77f527040f4334b88076500e883e"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.0"],"
size":"75938711"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"53d303f6a8d
75dced5c02c3c1cbdd1a3cbb3ae585327c36fcfcbe00c215cd841","repoDigests":["localhost/minikube-local-cache-test@sha256:0668822e7acd1f3a9e28b921edd8e538605775fbd4b4faf5d7baf69ae6818f44"],"repoTags":["localhost/minikube-local-cache-test:functional-619464"],"size":"3330"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"3d18732f8686cc3c878055d99a05fa80289
502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"35f3cbee4fb77c3efb39f2723a21ce181906139442a37de8ffc52d89641d9936","repoDigests":["docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8","docker.io/library/nginx@sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54348302"},{"id":"17848b7d08d196d4e7b420f48ba286132a07937574561d4a6c085651f5177f97","repoDigests":["docker.io/li
brary/nginx@sha256:059ceb5a1ded7032703d6290061911adc8a9c55298f372daaf63801600ec894e","docker.io/library/nginx@sha256:d5f28ef21aabddd098f3dbc21fe5b7a7d7a184720bc07da0b6c9b9820e97f25e"],"repoTags":["docker.io/library/nginx:latest"],"size":"202036629"},{"id":"a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","repoDigests":["registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"205987068"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"d291939e994064911484215449d0ab96c535b02adc4fc5d0ad4e438cf71465be","repoDigests":["regis
try.k8s.io/kube-apiserver@sha256:ef0790d885e5e46cad864b09351a201eb54f01ea5755de1c3a53a7d90ea1286f","registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.0"],"size":"84818927"},{"id":"a25f5ef9c34c37c649f3b4f9631a169221ac2d6f41d9767c7588cd355f76f9ee","repoDigests":["registry.k8s.io/kube-scheduler@sha256:03bf1b9fae1536dc052874c2943f6c9c16410bf65e88e042109d7edc0e574422","registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.0"],"size":"51592021"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6c
eeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-619464 image ls --format json --alsologtostderr:
I0917 00:47:43.817668  891081 out.go:360] Setting OutFile to fd 1 ...
I0917 00:47:43.817864  891081 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0917 00:47:43.817892  891081 out.go:374] Setting ErrFile to fd 2...
I0917 00:47:43.817910  891081 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0917 00:47:43.818237  891081 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-857204/.minikube/bin
I0917 00:47:43.819888  891081 config.go:182] Loaded profile config "functional-619464": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0917 00:47:43.820079  891081 config.go:182] Loaded profile config "functional-619464": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0917 00:47:43.820626  891081 cli_runner.go:164] Run: docker container inspect functional-619464 --format={{.State.Status}}
I0917 00:47:43.841877  891081 ssh_runner.go:195] Run: systemctl --version
I0917 00:47:43.841936  891081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-619464
I0917 00:47:43.861423  891081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33568 SSHKeyPath:/home/jenkins/minikube-integration/21550-857204/.minikube/machines/functional-619464/id_rsa Username:docker}
I0917 00:47:43.957664  891081 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-619464 image ls --format yaml --alsologtostderr:
- id: 17848b7d08d196d4e7b420f48ba286132a07937574561d4a6c085651f5177f97
repoDigests:
- docker.io/library/nginx@sha256:059ceb5a1ded7032703d6290061911adc8a9c55298f372daaf63801600ec894e
- docker.io/library/nginx@sha256:d5f28ef21aabddd098f3dbc21fe5b7a7d7a184720bc07da0b6c9b9820e97f25e
repoTags:
- docker.io/library/nginx:latest
size: "202036629"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- localhost/kicbase/echo-server:functional-619464
size: "4788229"
- id: 996be7e86d9b3a549d718de63713d9fea9db1f45ac44863a6770292d7b463570
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:02610466f968b6af36e9e76aee7a1d52f922fba4ec5cdb7a5423137d726f0da5
- registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.0
size: "72629077"
- id: 6fc32d66c141152245438e6512df788cb52d64a1617e33561950b0e7a4675abf
repoDigests:
- registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067
- registry.k8s.io/kube-proxy@sha256:49d0c22a7e97772329396bf30c435c70c6ad77f527040f4334b88076500e883e
repoTags:
- registry.k8s.io/kube-proxy:v1.34.0
size: "75938711"
- id: a25f5ef9c34c37c649f3b4f9631a169221ac2d6f41d9767c7588cd355f76f9ee
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:03bf1b9fae1536dc052874c2943f6c9c16410bf65e88e042109d7edc0e574422
- registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.0
size: "51592021"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: d291939e994064911484215449d0ab96c535b02adc4fc5d0ad4e438cf71465be
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:ef0790d885e5e46cad864b09351a201eb54f01ea5755de1c3a53a7d90ea1286f
- registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.0
size: "84818927"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 35f3cbee4fb77c3efb39f2723a21ce181906139442a37de8ffc52d89641d9936
repoDigests:
- docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8
- docker.io/library/nginx@sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac
repoTags:
- docker.io/library/nginx:alpine
size: "54348302"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 53d303f6a8d75dced5c02c3c1cbdd1a3cbb3ae585327c36fcfcbe00c215cd841
repoDigests:
- localhost/minikube-local-cache-test@sha256:0668822e7acd1f3a9e28b921edd8e538605775fbd4b4faf5d7baf69ae6818f44
repoTags:
- localhost/minikube-local-cache-test:functional-619464
size: "3330"
- id: a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests:
- registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "205987068"
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-619464 image ls --format yaml --alsologtostderr:
I0917 00:47:43.530004  891010 out.go:360] Setting OutFile to fd 1 ...
I0917 00:47:43.536733  891010 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0917 00:47:43.536791  891010 out.go:374] Setting ErrFile to fd 2...
I0917 00:47:43.536811  891010 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0917 00:47:43.537133  891010 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-857204/.minikube/bin
I0917 00:47:43.537844  891010 config.go:182] Loaded profile config "functional-619464": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0917 00:47:43.538061  891010 config.go:182] Loaded profile config "functional-619464": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0917 00:47:43.538571  891010 cli_runner.go:164] Run: docker container inspect functional-619464 --format={{.State.Status}}
I0917 00:47:43.561931  891010 ssh_runner.go:195] Run: systemctl --version
I0917 00:47:43.561990  891010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-619464
I0917 00:47:43.586537  891010 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33568 SSHKeyPath:/home/jenkins/minikube-integration/21550-857204/.minikube/machines/functional-619464/id_rsa Username:docker}
I0917 00:47:43.689459  891010 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-619464 ssh pgrep buildkitd: exit status 1 (323.865527ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 image build -t localhost/my-image:functional-619464 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-619464 image build -t localhost/my-image:functional-619464 testdata/build --alsologtostderr: (3.473613729s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-619464 image build -t localhost/my-image:functional-619464 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 59a8368bf55
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-619464
--> f81535982d0
Successfully tagged localhost/my-image:functional-619464
f81535982d060563b9ba0e3a8dd75aa2f9c85937cba2ec75634c4f936bbaea7d
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-619464 image build -t localhost/my-image:functional-619464 testdata/build --alsologtostderr:
I0917 00:47:43.979716  891132 out.go:360] Setting OutFile to fd 1 ...
I0917 00:47:43.980256  891132 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0917 00:47:43.980267  891132 out.go:374] Setting ErrFile to fd 2...
I0917 00:47:43.980272  891132 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0917 00:47:43.980622  891132 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-857204/.minikube/bin
I0917 00:47:43.981297  891132 config.go:182] Loaded profile config "functional-619464": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0917 00:47:43.982012  891132 config.go:182] Loaded profile config "functional-619464": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0917 00:47:43.982514  891132 cli_runner.go:164] Run: docker container inspect functional-619464 --format={{.State.Status}}
I0917 00:47:44.001453  891132 ssh_runner.go:195] Run: systemctl --version
I0917 00:47:44.001528  891132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-619464
I0917 00:47:44.024504  891132 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33568 SSHKeyPath:/home/jenkins/minikube-integration/21550-857204/.minikube/machines/functional-619464/id_rsa Username:docker}
I0917 00:47:44.128928  891132 build_images.go:161] Building image from path: /tmp/build.313728608.tar
I0917 00:47:44.129112  891132 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0917 00:47:44.139618  891132 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.313728608.tar
I0917 00:47:44.152880  891132 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.313728608.tar: stat -c "%s %y" /var/lib/minikube/build/build.313728608.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.313728608.tar': No such file or directory
I0917 00:47:44.152912  891132 ssh_runner.go:362] scp /tmp/build.313728608.tar --> /var/lib/minikube/build/build.313728608.tar (3072 bytes)
I0917 00:47:44.181558  891132 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.313728608
I0917 00:47:44.192777  891132 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.313728608 -xf /var/lib/minikube/build/build.313728608.tar
I0917 00:47:44.202321  891132 crio.go:315] Building image: /var/lib/minikube/build/build.313728608
I0917 00:47:44.202387  891132 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-619464 /var/lib/minikube/build/build.313728608 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I0917 00:47:47.366724  891132 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-619464 /var/lib/minikube/build/build.313728608 --cgroup-manager=cgroupfs: (3.164313969s)
I0917 00:47:47.366808  891132 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.313728608
I0917 00:47:47.375939  891132 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.313728608.tar
I0917 00:47:47.385113  891132 build_images.go:217] Built localhost/my-image:functional-619464 from /tmp/build.313728608.tar
I0917 00:47:47.385144  891132 build_images.go:133] succeeded building to: functional-619464
I0917 00:47:47.385150  891132 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.04s)

TestFunctional/parallel/ImageCommands/Setup (0.66s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-619464
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.66s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.66s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 image load --daemon kicbase/echo-server:functional-619464 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-619464 image load --daemon kicbase/echo-server:functional-619464 --alsologtostderr: (1.36276651s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.66s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.76s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 image load --daemon kicbase/echo-server:functional-619464 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-arm64 -p functional-619464 image load --daemon kicbase/echo-server:functional-619464 --alsologtostderr: (2.455841376s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.76s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.63s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-619464
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 image load --daemon kicbase/echo-server:functional-619464 --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-linux-arm64 -p functional-619464 image load --daemon kicbase/echo-server:functional-619464 --alsologtostderr: (1.067679409s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 image ls
2025/09/17 00:47:38 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.63s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.69s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 image save kicbase/echo-server:functional-619464 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.69s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.71s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 image rm kicbase/echo-server:functional-619464 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.71s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.97s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.97s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.20s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.25s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.25s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.71s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-619464
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-619464 image save --daemon kicbase/echo-server:functional-619464 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-619464
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.71s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-619464
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-619464
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-619464
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (200.82s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E0917 00:48:56.182634  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/addons-160127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:50:19.246877  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/addons-160127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-175390 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (3m19.985045806s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (200.82s)

TestMultiControlPlane/serial/DeployApp (9.8s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-175390 kubectl -- rollout status deployment/busybox: (6.977008964s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 kubectl -- exec busybox-7b57f96db7-2zsx9 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 kubectl -- exec busybox-7b57f96db7-dffrc -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 kubectl -- exec busybox-7b57f96db7-zgcr5 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 kubectl -- exec busybox-7b57f96db7-2zsx9 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 kubectl -- exec busybox-7b57f96db7-dffrc -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 kubectl -- exec busybox-7b57f96db7-zgcr5 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 kubectl -- exec busybox-7b57f96db7-2zsx9 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 kubectl -- exec busybox-7b57f96db7-dffrc -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 kubectl -- exec busybox-7b57f96db7-zgcr5 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (9.80s)

TestMultiControlPlane/serial/PingHostFromPods (1.59s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 kubectl -- exec busybox-7b57f96db7-2zsx9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 kubectl -- exec busybox-7b57f96db7-2zsx9 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 kubectl -- exec busybox-7b57f96db7-dffrc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 kubectl -- exec busybox-7b57f96db7-dffrc -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 kubectl -- exec busybox-7b57f96db7-zgcr5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 kubectl -- exec busybox-7b57f96db7-zgcr5 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.59s)

TestMultiControlPlane/serial/AddWorkerNode (60.48s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 node add --alsologtostderr -v 5
E0917 00:51:58.391275  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/functional-619464/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:51:58.397738  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/functional-619464/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:51:58.409140  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/functional-619464/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:51:58.430596  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/functional-619464/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:51:58.472103  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/functional-619464/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:51:58.553551  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/functional-619464/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:51:58.714887  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/functional-619464/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:51:59.036571  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/functional-619464/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:51:59.678746  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/functional-619464/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:52:00.960263  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/functional-619464/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:52:03.522359  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/functional-619464/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:52:08.643684  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/functional-619464/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:52:18.885783  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/functional-619464/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-175390 node add --alsologtostderr -v 5: (59.484658108s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (60.48s)

TestMultiControlPlane/serial/NodeLabels (0.1s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-175390 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.10s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.1s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.103498174s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.10s)

TestMultiControlPlane/serial/CopyFile (19.41s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-175390 status --output json --alsologtostderr -v 5: (1.058608587s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 cp testdata/cp-test.txt ha-175390:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 ssh -n ha-175390 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 cp ha-175390:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile210508591/001/cp-test_ha-175390.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 ssh -n ha-175390 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 cp ha-175390:/home/docker/cp-test.txt ha-175390-m02:/home/docker/cp-test_ha-175390_ha-175390-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 ssh -n ha-175390 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 ssh -n ha-175390-m02 "sudo cat /home/docker/cp-test_ha-175390_ha-175390-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 cp ha-175390:/home/docker/cp-test.txt ha-175390-m03:/home/docker/cp-test_ha-175390_ha-175390-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 ssh -n ha-175390 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 ssh -n ha-175390-m03 "sudo cat /home/docker/cp-test_ha-175390_ha-175390-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 cp ha-175390:/home/docker/cp-test.txt ha-175390-m04:/home/docker/cp-test_ha-175390_ha-175390-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 ssh -n ha-175390 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 ssh -n ha-175390-m04 "sudo cat /home/docker/cp-test_ha-175390_ha-175390-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 cp testdata/cp-test.txt ha-175390-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 ssh -n ha-175390-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 cp ha-175390-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile210508591/001/cp-test_ha-175390-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 ssh -n ha-175390-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 cp ha-175390-m02:/home/docker/cp-test.txt ha-175390:/home/docker/cp-test_ha-175390-m02_ha-175390.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 ssh -n ha-175390-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 ssh -n ha-175390 "sudo cat /home/docker/cp-test_ha-175390-m02_ha-175390.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 cp ha-175390-m02:/home/docker/cp-test.txt ha-175390-m03:/home/docker/cp-test_ha-175390-m02_ha-175390-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 ssh -n ha-175390-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 ssh -n ha-175390-m03 "sudo cat /home/docker/cp-test_ha-175390-m02_ha-175390-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 cp ha-175390-m02:/home/docker/cp-test.txt ha-175390-m04:/home/docker/cp-test_ha-175390-m02_ha-175390-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 ssh -n ha-175390-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 ssh -n ha-175390-m04 "sudo cat /home/docker/cp-test_ha-175390-m02_ha-175390-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 cp testdata/cp-test.txt ha-175390-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 ssh -n ha-175390-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 cp ha-175390-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile210508591/001/cp-test_ha-175390-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 ssh -n ha-175390-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 cp ha-175390-m03:/home/docker/cp-test.txt ha-175390:/home/docker/cp-test_ha-175390-m03_ha-175390.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 ssh -n ha-175390-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 ssh -n ha-175390 "sudo cat /home/docker/cp-test_ha-175390-m03_ha-175390.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 cp ha-175390-m03:/home/docker/cp-test.txt ha-175390-m02:/home/docker/cp-test_ha-175390-m03_ha-175390-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 ssh -n ha-175390-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 ssh -n ha-175390-m02 "sudo cat /home/docker/cp-test_ha-175390-m03_ha-175390-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 cp ha-175390-m03:/home/docker/cp-test.txt ha-175390-m04:/home/docker/cp-test_ha-175390-m03_ha-175390-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 ssh -n ha-175390-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 ssh -n ha-175390-m04 "sudo cat /home/docker/cp-test_ha-175390-m03_ha-175390-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 cp testdata/cp-test.txt ha-175390-m04:/home/docker/cp-test.txt
E0917 00:52:39.367477  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/functional-619464/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 ssh -n ha-175390-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 cp ha-175390-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile210508591/001/cp-test_ha-175390-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 ssh -n ha-175390-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 cp ha-175390-m04:/home/docker/cp-test.txt ha-175390:/home/docker/cp-test_ha-175390-m04_ha-175390.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 ssh -n ha-175390-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 ssh -n ha-175390 "sudo cat /home/docker/cp-test_ha-175390-m04_ha-175390.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 cp ha-175390-m04:/home/docker/cp-test.txt ha-175390-m02:/home/docker/cp-test_ha-175390-m04_ha-175390-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 ssh -n ha-175390-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 ssh -n ha-175390-m02 "sudo cat /home/docker/cp-test_ha-175390-m04_ha-175390-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 cp ha-175390-m04:/home/docker/cp-test.txt ha-175390-m03:/home/docker/cp-test_ha-175390-m04_ha-175390-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 ssh -n ha-175390-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 ssh -n ha-175390-m03 "sudo cat /home/docker/cp-test_ha-175390-m04_ha-175390-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.41s)

TestMultiControlPlane/serial/StopSecondaryNode (3.04s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-175390 node stop m02 --alsologtostderr -v 5: (2.264478647s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-175390 status --alsologtostderr -v 5: exit status 7 (774.222456ms)

-- stdout --
	ha-175390
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-175390-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-175390-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-175390-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0917 00:52:46.064948  907107 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:52:46.065189  907107 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:52:46.065214  907107 out.go:374] Setting ErrFile to fd 2...
	I0917 00:52:46.065244  907107 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:52:46.065619  907107 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-857204/.minikube/bin
	I0917 00:52:46.065906  907107 out.go:368] Setting JSON to false
	I0917 00:52:46.065962  907107 mustload.go:65] Loading cluster: ha-175390
	I0917 00:52:46.066468  907107 config.go:182] Loaded profile config "ha-175390": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:52:46.066518  907107 status.go:174] checking status of ha-175390 ...
	I0917 00:52:46.067208  907107 cli_runner.go:164] Run: docker container inspect ha-175390 --format={{.State.Status}}
	I0917 00:52:46.067409  907107 notify.go:220] Checking for updates...
	I0917 00:52:46.088656  907107 status.go:371] ha-175390 host status = "Running" (err=<nil>)
	I0917 00:52:46.088683  907107 host.go:66] Checking if "ha-175390" exists ...
	I0917 00:52:46.088982  907107 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-175390
	I0917 00:52:46.122604  907107 host.go:66] Checking if "ha-175390" exists ...
	I0917 00:52:46.122928  907107 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:52:46.122990  907107 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-175390
	I0917 00:52:46.144517  907107 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33573 SSHKeyPath:/home/jenkins/minikube-integration/21550-857204/.minikube/machines/ha-175390/id_rsa Username:docker}
	I0917 00:52:46.242305  907107 ssh_runner.go:195] Run: systemctl --version
	I0917 00:52:46.246859  907107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:52:46.259063  907107 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 00:52:46.324472  907107 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-09-17 00:52:46.315175252 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0917 00:52:46.325060  907107 kubeconfig.go:125] found "ha-175390" server: "https://192.168.49.254:8443"
	I0917 00:52:46.325101  907107 api_server.go:166] Checking apiserver status ...
	I0917 00:52:46.325155  907107 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:52:46.337157  907107 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1466/cgroup
	I0917 00:52:46.347227  907107 api_server.go:182] apiserver freezer: "12:freezer:/docker/2f7679e13c0272c8e805d624246db0a7dd8b86a42a3bd6d2e306245a90e001b7/crio/crio-4ad5e4bdb8efd02317db6a90e2125ea176a8e4da99ad94639ac5909a223e0d3d"
	I0917 00:52:46.347307  907107 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/2f7679e13c0272c8e805d624246db0a7dd8b86a42a3bd6d2e306245a90e001b7/crio/crio-4ad5e4bdb8efd02317db6a90e2125ea176a8e4da99ad94639ac5909a223e0d3d/freezer.state
	I0917 00:52:46.356476  907107 api_server.go:204] freezer state: "THAWED"
	I0917 00:52:46.356509  907107 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:52:46.365122  907107 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:52:46.365150  907107 status.go:463] ha-175390 apiserver status = Running (err=<nil>)
	I0917 00:52:46.365188  907107 status.go:176] ha-175390 status: &{Name:ha-175390 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:52:46.365213  907107 status.go:174] checking status of ha-175390-m02 ...
	I0917 00:52:46.365585  907107 cli_runner.go:164] Run: docker container inspect ha-175390-m02 --format={{.State.Status}}
	I0917 00:52:46.388376  907107 status.go:371] ha-175390-m02 host status = "Stopped" (err=<nil>)
	I0917 00:52:46.388398  907107 status.go:384] host is not running, skipping remaining checks
	I0917 00:52:46.388405  907107 status.go:176] ha-175390-m02 status: &{Name:ha-175390-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:52:46.388433  907107 status.go:174] checking status of ha-175390-m03 ...
	I0917 00:52:46.388927  907107 cli_runner.go:164] Run: docker container inspect ha-175390-m03 --format={{.State.Status}}
	I0917 00:52:46.406849  907107 status.go:371] ha-175390-m03 host status = "Running" (err=<nil>)
	I0917 00:52:46.406876  907107 host.go:66] Checking if "ha-175390-m03" exists ...
	I0917 00:52:46.407186  907107 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-175390-m03
	I0917 00:52:46.425189  907107 host.go:66] Checking if "ha-175390-m03" exists ...
	I0917 00:52:46.425518  907107 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:52:46.425566  907107 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-175390-m03
	I0917 00:52:46.446939  907107 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33583 SSHKeyPath:/home/jenkins/minikube-integration/21550-857204/.minikube/machines/ha-175390-m03/id_rsa Username:docker}
	I0917 00:52:46.542014  907107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:52:46.558571  907107 kubeconfig.go:125] found "ha-175390" server: "https://192.168.49.254:8443"
	I0917 00:52:46.558599  907107 api_server.go:166] Checking apiserver status ...
	I0917 00:52:46.558646  907107 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 00:52:46.570159  907107 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1329/cgroup
	I0917 00:52:46.579486  907107 api_server.go:182] apiserver freezer: "12:freezer:/docker/c2a9db73d5f3d7eb9da20a248d0a6508d0b5c57b44617ab0055f12e035fd7809/crio/crio-b045f63960fe7ca06f18a5e97de7a0b97d1a2e2f388c8abb20aa15e9800e159b"
	I0917 00:52:46.579556  907107 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/c2a9db73d5f3d7eb9da20a248d0a6508d0b5c57b44617ab0055f12e035fd7809/crio/crio-b045f63960fe7ca06f18a5e97de7a0b97d1a2e2f388c8abb20aa15e9800e159b/freezer.state
	I0917 00:52:46.588873  907107 api_server.go:204] freezer state: "THAWED"
	I0917 00:52:46.588964  907107 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 00:52:46.599413  907107 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 00:52:46.599455  907107 status.go:463] ha-175390-m03 apiserver status = Running (err=<nil>)
	I0917 00:52:46.599466  907107 status.go:176] ha-175390-m03 status: &{Name:ha-175390-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:52:46.599508  907107 status.go:174] checking status of ha-175390-m04 ...
	I0917 00:52:46.599881  907107 cli_runner.go:164] Run: docker container inspect ha-175390-m04 --format={{.State.Status}}
	I0917 00:52:46.618504  907107 status.go:371] ha-175390-m04 host status = "Running" (err=<nil>)
	I0917 00:52:46.618553  907107 host.go:66] Checking if "ha-175390-m04" exists ...
	I0917 00:52:46.619002  907107 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-175390-m04
	I0917 00:52:46.639116  907107 host.go:66] Checking if "ha-175390-m04" exists ...
	I0917 00:52:46.639465  907107 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 00:52:46.639519  907107 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-175390-m04
	I0917 00:52:46.662084  907107 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33588 SSHKeyPath:/home/jenkins/minikube-integration/21550-857204/.minikube/machines/ha-175390-m04/id_rsa Username:docker}
	I0917 00:52:46.757767  907107 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 00:52:46.770530  907107 status.go:176] ha-175390-m04 status: &{Name:ha-175390-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (3.04s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.81s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.81s)

TestMultiControlPlane/serial/RestartSecondaryNode (30.37s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-175390 node start m02 --alsologtostderr -v 5: (28.953704677s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-175390 status --alsologtostderr -v 5: (1.292301876s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (30.37s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.35s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.347712785s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.35s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (124.2s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 stop --alsologtostderr -v 5
E0917 00:53:20.329714  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/functional-619464/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-175390 stop --alsologtostderr -v 5: (26.813699732s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 start --wait true --alsologtostderr -v 5
E0917 00:53:56.176768  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/addons-160127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:54:42.251568  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/functional-619464/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-175390 start --wait true --alsologtostderr -v 5: (1m37.225548604s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (124.20s)

TestMultiControlPlane/serial/DeleteSecondaryNode (12.61s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-175390 node delete m03 --alsologtostderr -v 5: (11.710230904s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.61s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.77s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.77s)

TestMultiControlPlane/serial/StopCluster (35.94s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-175390 stop --alsologtostderr -v 5: (35.814401949s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-175390 status --alsologtostderr -v 5: exit status 7 (121.609031ms)
-- stdout --
	ha-175390
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-175390-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-175390-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0917 00:56:12.756920  920983 out.go:360] Setting OutFile to fd 1 ...
	I0917 00:56:12.757092  920983 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:56:12.757101  920983 out.go:374] Setting ErrFile to fd 2...
	I0917 00:56:12.757107  920983 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 00:56:12.757368  920983 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-857204/.minikube/bin
	I0917 00:56:12.757566  920983 out.go:368] Setting JSON to false
	I0917 00:56:12.757594  920983 mustload.go:65] Loading cluster: ha-175390
	I0917 00:56:12.757652  920983 notify.go:220] Checking for updates...
	I0917 00:56:12.758055  920983 config.go:182] Loaded profile config "ha-175390": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 00:56:12.758078  920983 status.go:174] checking status of ha-175390 ...
	I0917 00:56:12.758971  920983 cli_runner.go:164] Run: docker container inspect ha-175390 --format={{.State.Status}}
	I0917 00:56:12.778319  920983 status.go:371] ha-175390 host status = "Stopped" (err=<nil>)
	I0917 00:56:12.778342  920983 status.go:384] host is not running, skipping remaining checks
	I0917 00:56:12.778355  920983 status.go:176] ha-175390 status: &{Name:ha-175390 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:56:12.778380  920983 status.go:174] checking status of ha-175390-m02 ...
	I0917 00:56:12.778686  920983 cli_runner.go:164] Run: docker container inspect ha-175390-m02 --format={{.State.Status}}
	I0917 00:56:12.796227  920983 status.go:371] ha-175390-m02 host status = "Stopped" (err=<nil>)
	I0917 00:56:12.796310  920983 status.go:384] host is not running, skipping remaining checks
	I0917 00:56:12.796317  920983 status.go:176] ha-175390-m02 status: &{Name:ha-175390-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 00:56:12.796339  920983 status.go:174] checking status of ha-175390-m04 ...
	I0917 00:56:12.796715  920983 cli_runner.go:164] Run: docker container inspect ha-175390-m04 --format={{.State.Status}}
	I0917 00:56:12.825338  920983 status.go:371] ha-175390-m04 host status = "Stopped" (err=<nil>)
	I0917 00:56:12.825371  920983 status.go:384] host is not running, skipping remaining checks
	I0917 00:56:12.825377  920983 status.go:176] ha-175390-m04 status: &{Name:ha-175390-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.94s)

TestMultiControlPlane/serial/RestartCluster (82.37s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E0917 00:56:58.391183  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/functional-619464/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 00:57:26.093934  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/functional-619464/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-175390 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m21.382852464s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (82.37s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.75s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.75s)

TestMultiControlPlane/serial/AddSecondaryNode (70.15s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-175390 node add --control-plane --alsologtostderr -v 5: (1m9.013874194s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-175390 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-175390 status --alsologtostderr -v 5: (1.136620245s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (70.15s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.01s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.008235644s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.01s)

TestJSONOutput/start/Command (76.49s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-799665 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E0917 00:58:56.176375  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/addons-160127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-799665 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m16.487429599s)
--- PASS: TestJSONOutput/start/Command (76.49s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.77s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-799665 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.77s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.68s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-799665 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.68s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.86s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-799665 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-799665 --output=json --user=testUser: (5.8593648s)
--- PASS: TestJSONOutput/stop/Command (5.86s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.25s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-824192 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-824192 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (101.904233ms)
-- stdout --
	{"specversion":"1.0","id":"51e55189-22f6-481a-beba-4e29e919350c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-824192] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"cbdf447d-6527-41d6-816c-f3bc477623bd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21550"}}
	{"specversion":"1.0","id":"3a8e2e68-03a3-400d-beb5-3b0a1e9b4d1e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"92d11284-f0d9-4ce3-8524-d7daf26bc784","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21550-857204/kubeconfig"}}
	{"specversion":"1.0","id":"2c33ba14-8cd4-41af-9b54-14cc33604bc1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-857204/.minikube"}}
	{"specversion":"1.0","id":"0a4da4ee-2dc5-4aaf-86a5-0105838dfc80","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"8501126d-db0a-497e-ba1e-8eb82b12e630","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"109c31e9-9605-41e6-bbc1-f94d81b7bfd5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-824192" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-824192
--- PASS: TestErrorJSONOutput (0.25s)

TestKicCustomNetwork/create_custom_network (48.09s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-628077 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-628077 --network=: (46.003287531s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-628077" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-628077
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-628077: (2.064932399s)
--- PASS: TestKicCustomNetwork/create_custom_network (48.09s)

TestKicCustomNetwork/use_default_bridge_network (36.55s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-769362 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-769362 --network=bridge: (34.541389073s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-769362" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-769362
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-769362: (1.988301901s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (36.55s)

TestKicExistingNetwork (34.15s)

=== RUN   TestKicExistingNetwork
I0917 01:01:48.673067  859053 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0917 01:01:48.688111  859053 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0917 01:01:48.689245  859053 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0917 01:01:48.689286  859053 cli_runner.go:164] Run: docker network inspect existing-network
W0917 01:01:48.705200  859053 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0917 01:01:48.705235  859053 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I0917 01:01:48.705250  859053 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I0917 01:01:48.705357  859053 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0917 01:01:48.727296  859053 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-8a60558b6cfd IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:46:74:df:ea:1e:47} reservation:<nil>}
I0917 01:01:48.727716  859053 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001795160}
I0917 01:01:48.727743  859053 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0917 01:01:48.727795  859053 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0917 01:01:48.785312  859053 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-335725 --network=existing-network
E0917 01:01:58.391239  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/functional-619464/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-335725 --network=existing-network: (31.952449162s)
helpers_test.go:175: Cleaning up "existing-network-335725" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-335725
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-335725: (2.048815376s)
I0917 01:02:22.803178  859053 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (34.15s)
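For reference, the pre-created-network scenario this test exercises can be replayed by hand; a minimal sketch, assuming a running Docker daemon and the minikube binary on PATH (network, subnet, and profile names are taken from the log above; the test additionally passes MTU, masquerade, and label options omitted here):

```shell
# Create the network first, as the test does, then point minikube at it.
docker network create --driver=bridge \
  --subnet=192.168.58.0/24 --gateway=192.168.58.1 \
  existing-network

# minikube should reuse the existing network instead of creating its own.
minikube start -p existing-network-335725 --network=existing-network

# Teardown, mirroring the test's cleanup.
minikube delete -p existing-network-335725
docker network rm existing-network
```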

                                                
                                    
TestKicCustomSubnet (33.66s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-318135 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-318135 --subnet=192.168.60.0/24: (31.590939313s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-318135 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-318135" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-318135
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-318135: (2.050674598s)
--- PASS: TestKicCustomSubnet (33.66s)
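The subnet check above can be replayed manually; a sketch assuming a Docker daemon and minikube on PATH, with the profile name and subnet taken from this log:

```shell
minikube start -p custom-subnet-318135 --subnet=192.168.60.0/24

# The test asserts the requested subnet landed in the network's IPAM config;
# this Go template prints it directly.
docker network inspect custom-subnet-318135 \
  --format "{{(index .IPAM.Config 0).Subnet}}"
```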

                                                
                                    
TestKicStaticIP (34.83s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-049263 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-049263 --static-ip=192.168.200.200: (32.57912696s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-049263 ip
helpers_test.go:175: Cleaning up "static-ip-049263" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-049263
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-049263: (2.097284167s)
--- PASS: TestKicStaticIP (34.83s)
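The static-IP round trip is the same two commands the test runs; a sketch assuming a Docker daemon and minikube on PATH (profile name and address from this log):

```shell
minikube start -p static-ip-049263 --static-ip=192.168.200.200

# The test compares this output against the requested address.
minikube -p static-ip-049263 ip
```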

                                                
                                    
TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (72.68s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-107541 --driver=docker  --container-runtime=crio
E0917 01:03:56.180740  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/addons-160127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-107541 --driver=docker  --container-runtime=crio: (30.848849586s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-110388 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-110388 --driver=docker  --container-runtime=crio: (36.456515829s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-107541
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-110388
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-110388" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-110388
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-110388: (1.94954162s)
helpers_test.go:175: Cleaning up "first-107541" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-107541
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-107541: (1.984087997s)
--- PASS: TestMinikubeProfile (72.68s)
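The profile-switching flow above can be sketched as follows, assuming minikube on PATH with a Docker daemon; profile names are taken from this log:

```shell
minikube start -p first-107541 --driver=docker --container-runtime=crio
minikube start -p second-110388 --driver=docker --container-runtime=crio

# Switch the active profile, then list profiles as JSON, as the test does
# after each switch.
minikube profile first-107541
minikube profile list -o json
```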

                                                
                                    
TestMountStart/serial/StartWithMountFirst (7.22s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-419709 --memory=3072 --mount-string /tmp/TestMountStartserial1878427207/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-419709 --memory=3072 --mount-string /tmp/TestMountStartserial1878427207/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.216770141s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.22s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-419709 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (6.64s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-421859 --memory=3072 --mount-string /tmp/TestMountStartserial1878427207/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-421859 --memory=3072 --mount-string /tmp/TestMountStartserial1878427207/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.639742653s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.64s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-421859 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.73s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-419709 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-419709 --alsologtostderr -v=5: (1.727972435s)
--- PASS: TestMountStart/serial/DeleteFirst (1.73s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.39s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-421859 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.39s)

                                                
                                    
TestMountStart/serial/Stop (1.21s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-421859
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-421859: (1.214259652s)
--- PASS: TestMountStart/serial/Stop (1.21s)

                                                
                                    
TestMountStart/serial/RestartStopped (8.26s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-421859
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-421859: (7.261534378s)
--- PASS: TestMountStart/serial/RestartStopped (8.26s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-421859 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (134.35s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-371596 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E0917 01:06:58.391242  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/functional-619464/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 01:06:59.249051  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/addons-160127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-371596 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m13.826972963s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-371596 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (134.35s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (6.05s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-371596 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-371596 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-371596 -- rollout status deployment/busybox: (4.022827906s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-371596 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-371596 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-371596 -- exec busybox-7b57f96db7-tcthx -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-371596 -- exec busybox-7b57f96db7-v7vfv -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-371596 -- exec busybox-7b57f96db7-tcthx -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-371596 -- exec busybox-7b57f96db7-v7vfv -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-371596 -- exec busybox-7b57f96db7-tcthx -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-371596 -- exec busybox-7b57f96db7-v7vfv -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.05s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.96s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-371596 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-371596 -- exec busybox-7b57f96db7-tcthx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-371596 -- exec busybox-7b57f96db7-tcthx -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-371596 -- exec busybox-7b57f96db7-v7vfv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-371596 -- exec busybox-7b57f96db7-v7vfv -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.96s)
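The `awk 'NR==5' | cut -d' ' -f3` pipeline above extracts the host IP from BusyBox-style `nslookup` output: line 5 is the answer's `Address 1:` line, and its third space-separated field is the IP. A self-contained demo against canned output (the sample text is an assumed BusyBox format for illustration, not copied from this run):

```shell
# Canned BusyBox-style nslookup output (assumed format for illustration).
out="Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.67.1 host.minikube.internal"

# Same extraction the test runs inside the pod: take line 5, then field 3.
host_ip=$(printf '%s\n' "$out" | awk 'NR==5' | cut -d' ' -f3)
echo "$host_ip"   # 192.168.67.1
```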

                                                
                                    
TestMultiNode/serial/AddNode (55.03s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-371596 -v=5 --alsologtostderr
E0917 01:08:21.456286  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/functional-619464/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-371596 -v=5 --alsologtostderr: (54.339440734s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-371596 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (55.03s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-371596 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.7s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.70s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.2s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-371596 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-371596 cp testdata/cp-test.txt multinode-371596:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-371596 ssh -n multinode-371596 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-371596 cp multinode-371596:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1075489947/001/cp-test_multinode-371596.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-371596 ssh -n multinode-371596 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-371596 cp multinode-371596:/home/docker/cp-test.txt multinode-371596-m02:/home/docker/cp-test_multinode-371596_multinode-371596-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-371596 ssh -n multinode-371596 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-371596 ssh -n multinode-371596-m02 "sudo cat /home/docker/cp-test_multinode-371596_multinode-371596-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-371596 cp multinode-371596:/home/docker/cp-test.txt multinode-371596-m03:/home/docker/cp-test_multinode-371596_multinode-371596-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-371596 ssh -n multinode-371596 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-371596 ssh -n multinode-371596-m03 "sudo cat /home/docker/cp-test_multinode-371596_multinode-371596-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-371596 cp testdata/cp-test.txt multinode-371596-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-371596 ssh -n multinode-371596-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-371596 cp multinode-371596-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1075489947/001/cp-test_multinode-371596-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-371596 ssh -n multinode-371596-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-371596 cp multinode-371596-m02:/home/docker/cp-test.txt multinode-371596:/home/docker/cp-test_multinode-371596-m02_multinode-371596.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-371596 ssh -n multinode-371596-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-371596 ssh -n multinode-371596 "sudo cat /home/docker/cp-test_multinode-371596-m02_multinode-371596.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-371596 cp multinode-371596-m02:/home/docker/cp-test.txt multinode-371596-m03:/home/docker/cp-test_multinode-371596-m02_multinode-371596-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-371596 ssh -n multinode-371596-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-371596 ssh -n multinode-371596-m03 "sudo cat /home/docker/cp-test_multinode-371596-m02_multinode-371596-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-371596 cp testdata/cp-test.txt multinode-371596-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-371596 ssh -n multinode-371596-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-371596 cp multinode-371596-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1075489947/001/cp-test_multinode-371596-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-371596 ssh -n multinode-371596-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-371596 cp multinode-371596-m03:/home/docker/cp-test.txt multinode-371596:/home/docker/cp-test_multinode-371596-m03_multinode-371596.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-371596 ssh -n multinode-371596-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-371596 ssh -n multinode-371596 "sudo cat /home/docker/cp-test_multinode-371596-m03_multinode-371596.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-371596 cp multinode-371596-m03:/home/docker/cp-test.txt multinode-371596-m02:/home/docker/cp-test_multinode-371596-m03_multinode-371596-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-371596 ssh -n multinode-371596-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-371596 ssh -n multinode-371596-m02 "sudo cat /home/docker/cp-test_multinode-371596-m03_multinode-371596-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.20s)
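The copy matrix above boils down to three `minikube cp` directions (host to node, node to host, node to node), each verified with `ssh -- sudo cat`; a condensed sketch assuming a running multinode-371596 cluster (destination paths here are illustrative, not the exact ones from the log):

```shell
# host -> node
minikube -p multinode-371596 cp testdata/cp-test.txt \
  multinode-371596:/home/docker/cp-test.txt
# node -> host
minikube -p multinode-371596 cp \
  multinode-371596:/home/docker/cp-test.txt /tmp/cp-test_back.txt
# node -> node
minikube -p multinode-371596 cp \
  multinode-371596:/home/docker/cp-test.txt \
  multinode-371596-m02:/home/docker/cp-test_from-m01.txt

# Verification, as in the test: cat the file on the destination node.
minikube -p multinode-371596 ssh -n multinode-371596-m02 \
  "sudo cat /home/docker/cp-test_from-m01.txt"
```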

                                                
                                    
TestMultiNode/serial/StopNode (2.27s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-371596 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-371596 node stop m03: (1.209751218s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-371596 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-371596 status: exit status 7 (541.565131ms)

-- stdout --
	multinode-371596
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-371596-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-371596-m03
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-371596 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-371596 status --alsologtostderr: exit status 7 (520.20594ms)

-- stdout --
	multinode-371596
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-371596-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-371596-m03
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I0917 01:08:41.263140  974288 out.go:360] Setting OutFile to fd 1 ...
	I0917 01:08:41.263336  974288 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 01:08:41.263362  974288 out.go:374] Setting ErrFile to fd 2...
	I0917 01:08:41.263380  974288 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 01:08:41.263669  974288 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-857204/.minikube/bin
	I0917 01:08:41.263901  974288 out.go:368] Setting JSON to false
	I0917 01:08:41.263960  974288 mustload.go:65] Loading cluster: multinode-371596
	I0917 01:08:41.263992  974288 notify.go:220] Checking for updates...
	I0917 01:08:41.264511  974288 config.go:182] Loaded profile config "multinode-371596": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 01:08:41.264866  974288 status.go:174] checking status of multinode-371596 ...
	I0917 01:08:41.265476  974288 cli_runner.go:164] Run: docker container inspect multinode-371596 --format={{.State.Status}}
	I0917 01:08:41.285400  974288 status.go:371] multinode-371596 host status = "Running" (err=<nil>)
	I0917 01:08:41.285428  974288 host.go:66] Checking if "multinode-371596" exists ...
	I0917 01:08:41.285724  974288 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-371596
	I0917 01:08:41.310145  974288 host.go:66] Checking if "multinode-371596" exists ...
	I0917 01:08:41.310476  974288 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 01:08:41.310527  974288 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-371596
	I0917 01:08:41.331370  974288 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33693 SSHKeyPath:/home/jenkins/minikube-integration/21550-857204/.minikube/machines/multinode-371596/id_rsa Username:docker}
	I0917 01:08:41.426438  974288 ssh_runner.go:195] Run: systemctl --version
	I0917 01:08:41.430847  974288 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 01:08:41.442529  974288 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 01:08:41.506886  974288 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-09-17 01:08:41.497436079 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0917 01:08:41.507448  974288 kubeconfig.go:125] found "multinode-371596" server: "https://192.168.67.2:8443"
	I0917 01:08:41.507489  974288 api_server.go:166] Checking apiserver status ...
	I0917 01:08:41.507533  974288 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 01:08:41.519145  974288 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1430/cgroup
	I0917 01:08:41.528735  974288 api_server.go:182] apiserver freezer: "12:freezer:/docker/e83c82ab150db4168ccd5036e827ed6a51fb5848e69e0d22d89c2e7d10bfb723/crio/crio-bc2a8738ae402d421d0d608adc7c8e55f090351524fd546eed451af6c585b34a"
	I0917 01:08:41.528812  974288 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/e83c82ab150db4168ccd5036e827ed6a51fb5848e69e0d22d89c2e7d10bfb723/crio/crio-bc2a8738ae402d421d0d608adc7c8e55f090351524fd546eed451af6c585b34a/freezer.state
	I0917 01:08:41.537682  974288 api_server.go:204] freezer state: "THAWED"
	I0917 01:08:41.537708  974288 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0917 01:08:41.545974  974288 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0917 01:08:41.546004  974288 status.go:463] multinode-371596 apiserver status = Running (err=<nil>)
	I0917 01:08:41.546015  974288 status.go:176] multinode-371596 status: &{Name:multinode-371596 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 01:08:41.546053  974288 status.go:174] checking status of multinode-371596-m02 ...
	I0917 01:08:41.546387  974288 cli_runner.go:164] Run: docker container inspect multinode-371596-m02 --format={{.State.Status}}
	I0917 01:08:41.563603  974288 status.go:371] multinode-371596-m02 host status = "Running" (err=<nil>)
	I0917 01:08:41.563627  974288 host.go:66] Checking if "multinode-371596-m02" exists ...
	I0917 01:08:41.563942  974288 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-371596-m02
	I0917 01:08:41.581420  974288 host.go:66] Checking if "multinode-371596-m02" exists ...
	I0917 01:08:41.581729  974288 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 01:08:41.581772  974288 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-371596-m02
	I0917 01:08:41.601523  974288 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33698 SSHKeyPath:/home/jenkins/minikube-integration/21550-857204/.minikube/machines/multinode-371596-m02/id_rsa Username:docker}
	I0917 01:08:41.697506  974288 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 01:08:41.708927  974288 status.go:176] multinode-371596-m02 status: &{Name:multinode-371596-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0917 01:08:41.709010  974288 status.go:174] checking status of multinode-371596-m03 ...
	I0917 01:08:41.709347  974288 cli_runner.go:164] Run: docker container inspect multinode-371596-m03 --format={{.State.Status}}
	I0917 01:08:41.726836  974288 status.go:371] multinode-371596-m03 host status = "Stopped" (err=<nil>)
	I0917 01:08:41.726925  974288 status.go:384] host is not running, skipping remaining checks
	I0917 01:08:41.726933  974288 status.go:176] multinode-371596-m03 status: &{Name:multinode-371596-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.27s)

TestMultiNode/serial/StartAfterStop (8.21s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-371596 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-371596 node start m03 -v=5 --alsologtostderr: (7.452268507s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-371596 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.21s)

TestMultiNode/serial/RestartKeepsNodes (78.97s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-371596
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-371596
E0917 01:08:56.179452  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/addons-160127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-371596: (24.765117487s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-371596 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-371596 --wait=true -v=5 --alsologtostderr: (54.08774199s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-371596
--- PASS: TestMultiNode/serial/RestartKeepsNodes (78.97s)

TestMultiNode/serial/DeleteNode (5.61s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-371596 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-371596 node delete m03: (4.912932437s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-371596 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.61s)

TestMultiNode/serial/StopMultiNode (23.83s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-371596 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-371596 stop: (23.634771537s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-371596 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-371596 status: exit status 7 (104.099131ms)

-- stdout --
	multinode-371596
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-371596-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-371596 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-371596 status --alsologtostderr: exit status 7 (86.713824ms)

-- stdout --
	multinode-371596
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-371596-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0917 01:10:38.306144  982282 out.go:360] Setting OutFile to fd 1 ...
	I0917 01:10:38.306262  982282 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 01:10:38.306274  982282 out.go:374] Setting ErrFile to fd 2...
	I0917 01:10:38.306280  982282 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 01:10:38.306534  982282 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-857204/.minikube/bin
	I0917 01:10:38.306714  982282 out.go:368] Setting JSON to false
	I0917 01:10:38.306761  982282 mustload.go:65] Loading cluster: multinode-371596
	I0917 01:10:38.306839  982282 notify.go:220] Checking for updates...
	I0917 01:10:38.307924  982282 config.go:182] Loaded profile config "multinode-371596": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 01:10:38.307952  982282 status.go:174] checking status of multinode-371596 ...
	I0917 01:10:38.308615  982282 cli_runner.go:164] Run: docker container inspect multinode-371596 --format={{.State.Status}}
	I0917 01:10:38.326275  982282 status.go:371] multinode-371596 host status = "Stopped" (err=<nil>)
	I0917 01:10:38.326295  982282 status.go:384] host is not running, skipping remaining checks
	I0917 01:10:38.326302  982282 status.go:176] multinode-371596 status: &{Name:multinode-371596 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 01:10:38.326327  982282 status.go:174] checking status of multinode-371596-m02 ...
	I0917 01:10:38.326660  982282 cli_runner.go:164] Run: docker container inspect multinode-371596-m02 --format={{.State.Status}}
	I0917 01:10:38.346792  982282 status.go:371] multinode-371596-m02 host status = "Stopped" (err=<nil>)
	I0917 01:10:38.346812  982282 status.go:384] host is not running, skipping remaining checks
	I0917 01:10:38.346819  982282 status.go:176] multinode-371596-m02 status: &{Name:multinode-371596-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.83s)

TestMultiNode/serial/RestartMultiNode (57.71s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-371596 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-371596 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (57.015678643s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-371596 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (57.71s)

TestMultiNode/serial/ValidateNameConflict (34.26s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-371596
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-371596-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-371596-m02 --driver=docker  --container-runtime=crio: exit status 14 (88.872533ms)

-- stdout --
	* [multinode-371596-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21550
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21550-857204/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-857204/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-371596-m02' is duplicated with machine name 'multinode-371596-m02' in profile 'multinode-371596'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-371596-m03 --driver=docker  --container-runtime=crio
E0917 01:11:58.391128  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/functional-619464/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-371596-m03 --driver=docker  --container-runtime=crio: (31.851480196s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-371596
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-371596: exit status 80 (335.166741ms)

-- stdout --
	* Adding node m03 to cluster multinode-371596 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-371596-m03 already exists in multinode-371596-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-371596-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-371596-m03: (1.927787591s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (34.26s)

TestPreload (141.2s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-793716 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-793716 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (1m4.949291733s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-793716 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-793716 image pull gcr.io/k8s-minikube/busybox: (3.725385082s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-793716
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-793716: (5.781717283s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-793716 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E0917 01:13:56.176418  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/addons-160127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-793716 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (1m4.134545261s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-793716 image list
helpers_test.go:175: Cleaning up "test-preload-793716" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-793716
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-793716: (2.362168006s)
--- PASS: TestPreload (141.20s)

TestScheduledStopUnix (106.65s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-243296 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-243296 --memory=3072 --driver=docker  --container-runtime=crio: (30.526001412s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-243296 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-243296 -n scheduled-stop-243296
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-243296 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0917 01:15:06.666969  859053 retry.go:31] will retry after 102.839µs: open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/scheduled-stop-243296/pid: no such file or directory
I0917 01:15:06.668099  859053 retry.go:31] will retry after 216.902µs: open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/scheduled-stop-243296/pid: no such file or directory
I0917 01:15:06.669193  859053 retry.go:31] will retry after 320.422µs: open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/scheduled-stop-243296/pid: no such file or directory
I0917 01:15:06.672637  859053 retry.go:31] will retry after 437.588µs: open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/scheduled-stop-243296/pid: no such file or directory
I0917 01:15:06.673764  859053 retry.go:31] will retry after 443.675µs: open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/scheduled-stop-243296/pid: no such file or directory
I0917 01:15:06.674875  859053 retry.go:31] will retry after 728.575µs: open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/scheduled-stop-243296/pid: no such file or directory
I0917 01:15:06.675964  859053 retry.go:31] will retry after 1.057066ms: open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/scheduled-stop-243296/pid: no such file or directory
I0917 01:15:06.677087  859053 retry.go:31] will retry after 1.267094ms: open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/scheduled-stop-243296/pid: no such file or directory
I0917 01:15:06.679283  859053 retry.go:31] will retry after 3.270384ms: open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/scheduled-stop-243296/pid: no such file or directory
I0917 01:15:06.683476  859053 retry.go:31] will retry after 3.847545ms: open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/scheduled-stop-243296/pid: no such file or directory
I0917 01:15:06.687735  859053 retry.go:31] will retry after 3.568449ms: open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/scheduled-stop-243296/pid: no such file or directory
I0917 01:15:06.691933  859053 retry.go:31] will retry after 8.700764ms: open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/scheduled-stop-243296/pid: no such file or directory
I0917 01:15:06.701093  859053 retry.go:31] will retry after 19.172442ms: open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/scheduled-stop-243296/pid: no such file or directory
I0917 01:15:06.721581  859053 retry.go:31] will retry after 26.185173ms: open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/scheduled-stop-243296/pid: no such file or directory
I0917 01:15:06.748407  859053 retry.go:31] will retry after 36.30172ms: open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/scheduled-stop-243296/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-243296 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-243296 -n scheduled-stop-243296
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-243296
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-243296 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-243296
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-243296: exit status 7 (67.798749ms)

-- stdout --
	scheduled-stop-243296
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-243296 -n scheduled-stop-243296
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-243296 -n scheduled-stop-243296: exit status 7 (67.212821ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-243296" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-243296
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-243296: (4.558620255s)
--- PASS: TestScheduledStopUnix (106.65s)

TestInsufficientStorage (10.88s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-151356 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-151356 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (8.394194985s)

-- stdout --
	{"specversion":"1.0","id":"5fcedb48-38a7-4576-923a-10f2091e3c7c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-151356] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ae6a361a-5066-43cc-9368-df804ae51c1d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21550"}}
	{"specversion":"1.0","id":"18583091-0eee-488a-aaed-068f9b7c218b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"753c2c3c-c2c5-4d4b-9953-e917edf672f2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21550-857204/kubeconfig"}}
	{"specversion":"1.0","id":"a1c79857-da06-49bb-a9e2-816ce7d1505a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-857204/.minikube"}}
	{"specversion":"1.0","id":"46e66d1f-a1a2-41d8-8699-d08f91577707","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"411d7aac-4d32-4cf9-bfb2-5f7c11b114ca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1b0c1257-6d55-4d03-beee-fa8e32c77575","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"1374fae6-cda3-4f85-83f1-5e5c9ea6778c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"19c70f63-1c38-441e-b7ab-0756d5e91fff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"2376bdb3-deeb-4a7a-8531-139989b0a47d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"d436b3d7-df6c-4cc7-a834-fd4215abc03a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-151356\" primary control-plane node in \"insufficient-storage-151356\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"02dd7763-e812-425d-a3f5-e36e1bd0eab6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"66636c31-e94f-4fd7-a995-51ea92cfc314","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"c1dcdc22-4c3f-46a5-b6b0-61ab9fbce2e8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-151356 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-151356 --output=json --layout=cluster: exit status 7 (287.212272ms)

-- stdout --
	{"Name":"insufficient-storage-151356","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-151356","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0917 01:16:30.918813  999656 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-151356" does not appear in /home/jenkins/minikube-integration/21550-857204/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-151356 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-151356 --output=json --layout=cluster: exit status 7 (296.279877ms)

-- stdout --
	{"Name":"insufficient-storage-151356","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-151356","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0917 01:16:31.218051  999718 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-151356" does not appear in /home/jenkins/minikube-integration/21550-857204/kubeconfig
	E0917 01:16:31.228590  999718 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/insufficient-storage-151356/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-151356" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-151356
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-151356: (1.900524645s)
--- PASS: TestInsufficientStorage (10.88s)

TestRunningBinaryUpgrade (55.06s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.541833430 start -p running-upgrade-064658 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.541833430 start -p running-upgrade-064658 --memory=3072 --vm-driver=docker  --container-runtime=crio: (33.964742135s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-064658 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-064658 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (18.175086046s)
helpers_test.go:175: Cleaning up "running-upgrade-064658" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-064658
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-064658: (2.059989368s)
--- PASS: TestRunningBinaryUpgrade (55.06s)

TestKubernetesUpgrade (342.2s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-938668 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-938668 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (37.790475989s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-938668
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-938668: (1.325976483s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-938668 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-938668 status --format={{.Host}}: exit status 7 (135.036464ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-938668 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-938668 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m36.945739082s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-938668 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-938668 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-938668 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (156.957622ms)

-- stdout --
	* [kubernetes-upgrade-938668] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21550
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21550-857204/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-857204/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-938668
	    minikube start -p kubernetes-upgrade-938668 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9386682 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.0, by running:
	    
	    minikube start -p kubernetes-upgrade-938668 --kubernetes-version=v1.34.0
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-938668 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-938668 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (23.391980137s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-938668" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-938668
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-938668: (2.256990487s)
--- PASS: TestKubernetesUpgrade (342.20s)

TestMissingContainerUpgrade (122.8s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.3310182256 start -p missing-upgrade-240825 --memory=3072 --driver=docker  --container-runtime=crio
E0917 01:16:58.391040  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/functional-619464/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.3310182256 start -p missing-upgrade-240825 --memory=3072 --driver=docker  --container-runtime=crio: (1m5.223265472s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-240825
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-240825
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-240825 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-240825 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (52.753075645s)
helpers_test.go:175: Cleaning up "missing-upgrade-240825" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-240825
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-240825: (2.10581666s)
--- PASS: TestMissingContainerUpgrade (122.80s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-528665 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-528665 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (106.911414ms)

-- stdout --
	* [NoKubernetes-528665] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21550
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21550-857204/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-857204/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

TestNoKubernetes/serial/StartWithK8s (45.09s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-528665 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-528665 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (44.647732308s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-528665 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (45.09s)

TestNoKubernetes/serial/StartWithStopK8s (8.45s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-528665 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-528665 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (5.96975955s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-528665 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-528665 status -o json: exit status 2 (401.996057ms)

-- stdout --
	{"Name":"NoKubernetes-528665","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-528665
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-528665: (2.076272029s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.45s)

TestNoKubernetes/serial/Start (9.69s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-528665 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-528665 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (9.685112156s)
--- PASS: TestNoKubernetes/serial/Start (9.69s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.39s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-528665 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-528665 "sudo systemctl is-active --quiet service kubelet": exit status 1 (386.738652ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.39s)

TestNoKubernetes/serial/ProfileList (1.35s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.35s)

TestNoKubernetes/serial/Stop (1.27s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-528665
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-528665: (1.266000018s)
--- PASS: TestNoKubernetes/serial/Stop (1.27s)

TestNoKubernetes/serial/StartNoArgs (7.31s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-528665 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-528665 --driver=docker  --container-runtime=crio: (7.304958721s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.31s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-528665 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-528665 "sudo systemctl is-active --quiet service kubelet": exit status 1 (285.22126ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

TestStoppedBinaryUpgrade/Setup (1.25s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.25s)

TestStoppedBinaryUpgrade/Upgrade (63.8s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.3329341620 start -p stopped-upgrade-251956 --memory=3072 --vm-driver=docker  --container-runtime=crio
E0917 01:18:56.176388  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/addons-160127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.3329341620 start -p stopped-upgrade-251956 --memory=3072 --vm-driver=docker  --container-runtime=crio: (41.635730421s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.3329341620 -p stopped-upgrade-251956 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.3329341620 -p stopped-upgrade-251956 stop: (1.258785794s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-251956 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-251956 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (20.900541333s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (63.80s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.31s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-251956
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-251956: (1.305912017s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.31s)

TestPause/serial/Start (85.22s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-213943 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E0917 01:21:58.391141  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/functional-619464/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-213943 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m25.218946021s)
--- PASS: TestPause/serial/Start (85.22s)

TestPause/serial/SecondStartNoReconfiguration (25.09s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-213943 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-213943 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (25.060915453s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (25.09s)

TestPause/serial/Pause (0.82s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-213943 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.82s)

TestPause/serial/VerifyStatus (0.32s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-213943 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-213943 --output=json --layout=cluster: exit status 2 (324.601753ms)

-- stdout --
	{"Name":"pause-213943","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-213943","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.32s)

TestPause/serial/Unpause (0.7s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-213943 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.70s)

TestPause/serial/PauseAgain (1.27s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-213943 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-213943 --alsologtostderr -v=5: (1.267991239s)
--- PASS: TestPause/serial/PauseAgain (1.27s)

TestPause/serial/DeletePaused (2.79s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-213943 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-213943 --alsologtostderr -v=5: (2.794061315s)
--- PASS: TestPause/serial/DeletePaused (2.79s)

TestPause/serial/VerifyDeletedResources (0.49s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-213943
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-213943: exit status 1 (27.734523ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-213943: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.49s)

TestNetworkPlugins/group/false (4.94s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-694260 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-694260 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (307.122429ms)

-- stdout --
	* [false-694260] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21550
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21550-857204/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-857204/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0917 01:23:24.976398 1038363 out.go:360] Setting OutFile to fd 1 ...
	I0917 01:23:24.976668 1038363 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 01:23:24.976678 1038363 out.go:374] Setting ErrFile to fd 2...
	I0917 01:23:24.976683 1038363 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0917 01:23:24.977044 1038363 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21550-857204/.minikube/bin
	I0917 01:23:24.977668 1038363 out.go:368] Setting JSON to false
	I0917 01:23:24.979015 1038363 start.go:130] hostinfo: {"hostname":"ip-172-31-30-239","uptime":14743,"bootTime":1758057462,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0917 01:23:24.979111 1038363 start.go:140] virtualization:  
	I0917 01:23:24.983983 1038363 out.go:179] * [false-694260] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0917 01:23:24.987163 1038363 out.go:179]   - MINIKUBE_LOCATION=21550
	I0917 01:23:24.987169 1038363 notify.go:220] Checking for updates...
	I0917 01:23:24.990745 1038363 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 01:23:24.993716 1038363 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21550-857204/kubeconfig
	I0917 01:23:24.996610 1038363 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21550-857204/.minikube
	I0917 01:23:24.999635 1038363 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0917 01:23:25.002831 1038363 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 01:23:25.006360 1038363 config.go:182] Loaded profile config "kubernetes-upgrade-938668": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0917 01:23:25.006484 1038363 driver.go:421] Setting default libvirt URI to qemu:///system
	I0917 01:23:25.056105 1038363 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0917 01:23:25.056316 1038363 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 01:23:25.151163 1038363 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-09-17 01:23:25.140819316 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0917 01:23:25.151284 1038363 docker.go:318] overlay module found
	I0917 01:23:25.154345 1038363 out.go:179] * Using the docker driver based on user configuration
	I0917 01:23:25.157153 1038363 start.go:304] selected driver: docker
	I0917 01:23:25.157169 1038363 start.go:918] validating driver "docker" against <nil>
	I0917 01:23:25.157185 1038363 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 01:23:25.160676 1038363 out.go:203] 
	W0917 01:23:25.163799 1038363 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0917 01:23:25.166637 1038363 out.go:203] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-694260 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-694260

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-694260

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-694260

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-694260

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-694260

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-694260

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-694260

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-694260

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-694260

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-694260

>>> host: /etc/nsswitch.conf:
* Profile "false-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694260"

>>> host: /etc/hosts:
* Profile "false-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694260"

>>> host: /etc/resolv.conf:
* Profile "false-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694260"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-694260

>>> host: crictl pods:
* Profile "false-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694260"

>>> host: crictl containers:
* Profile "false-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694260"

>>> k8s: describe netcat deployment:
error: context "false-694260" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-694260" does not exist

>>> k8s: netcat logs:
error: context "false-694260" does not exist

>>> k8s: describe coredns deployment:
error: context "false-694260" does not exist

>>> k8s: describe coredns pods:
error: context "false-694260" does not exist

>>> k8s: coredns logs:
error: context "false-694260" does not exist

>>> k8s: describe api server pod(s):
error: context "false-694260" does not exist

>>> k8s: api server logs:
error: context "false-694260" does not exist

>>> host: /etc/cni:
* Profile "false-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694260"

>>> host: ip a s:
* Profile "false-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694260"

>>> host: ip r s:
* Profile "false-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694260"

>>> host: iptables-save:
* Profile "false-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694260"

>>> host: iptables table nat:
* Profile "false-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694260"

>>> k8s: describe kube-proxy daemon set:
error: context "false-694260" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-694260" does not exist

>>> k8s: kube-proxy logs:
error: context "false-694260" does not exist

>>> host: kubelet daemon status:
* Profile "false-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694260"

>>> host: kubelet daemon config:
* Profile "false-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694260"

>>> k8s: kubelet logs:
* Profile "false-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694260"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694260"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694260"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21550-857204/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 17 Sep 2025 01:23:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-938668
contexts:
- context:
    cluster: kubernetes-upgrade-938668
    extensions:
    - extension:
        last-update: Wed, 17 Sep 2025 01:23:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-938668
  name: kubernetes-upgrade-938668
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-938668
  user:
    client-certificate: /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/kubernetes-upgrade-938668/client.crt
    client-key: /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/kubernetes-upgrade-938668/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-694260

>>> host: docker daemon status:
* Profile "false-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694260"

>>> host: docker daemon config:
* Profile "false-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694260"

>>> host: /etc/docker/daemon.json:
* Profile "false-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694260"

>>> host: docker system info:
* Profile "false-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694260"

>>> host: cri-docker daemon status:
* Profile "false-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694260"

>>> host: cri-docker daemon config:
* Profile "false-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694260"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694260"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694260"

>>> host: cri-dockerd version:
* Profile "false-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694260"

>>> host: containerd daemon status:
* Profile "false-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694260"

>>> host: containerd daemon config:
* Profile "false-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694260"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694260"

>>> host: /etc/containerd/config.toml:
* Profile "false-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694260"

>>> host: containerd config dump:
* Profile "false-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694260"

>>> host: crio daemon status:
* Profile "false-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694260"

>>> host: crio daemon config:
* Profile "false-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694260"

>>> host: /etc/crio:
* Profile "false-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694260"

>>> host: crio config:
* Profile "false-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694260"

----------------------- debugLogs end: false-694260 [took: 4.451381774s] --------------------------------
helpers_test.go:175: Cleaning up "false-694260" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-694260
--- PASS: TestNetworkPlugins/group/false (4.94s)

TestStartStop/group/old-k8s-version/serial/FirstStart (59.16s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-384921 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
E0917 01:25:01.458182  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/functional-619464/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-384921 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (59.157621635s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (59.16s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.58s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-384921 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [92ac1614-1b3f-42a4-b2bd-14fc85ea94b3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [92ac1614-1b3f-42a4-b2bd-14fc85ea94b3] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003296781s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-384921 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.58s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.17s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-384921 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-384921 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.051264068s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-384921 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.17s)

TestStartStop/group/old-k8s-version/serial/Stop (12.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-384921 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-384921 --alsologtostderr -v=3: (12.213617512s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.21s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-384921 -n old-k8s-version-384921
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-384921 -n old-k8s-version-384921: exit status 7 (72.2096ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-384921 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/old-k8s-version/serial/SecondStart (50.67s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-384921 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
E0917 01:26:58.390668  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/functional-619464/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-384921 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (50.290308289s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-384921 -n old-k8s-version-384921
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (50.67s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-8jgm9" [95fcb0e1-c513-4fd7-85a4-189bf9936bfc] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003803644s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-8jgm9" [95fcb0e1-c513-4fd7-85a4-189bf9936bfc] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004458524s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-384921 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-384921 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/old-k8s-version/serial/Pause (3.17s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-384921 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-384921 -n old-k8s-version-384921
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-384921 -n old-k8s-version-384921: exit status 2 (322.625281ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-384921 -n old-k8s-version-384921
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-384921 -n old-k8s-version-384921: exit status 2 (316.365748ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-384921 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-384921 -n old-k8s-version-384921
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-384921 -n old-k8s-version-384921
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.17s)

TestStartStop/group/no-preload/serial/FirstStart (75.33s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-235708 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-235708 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (1m15.331202497s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (75.33s)

TestStartStop/group/embed-certs/serial/FirstStart (84.25s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-734156 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-734156 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (1m24.246816863s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (84.25s)

TestStartStop/group/no-preload/serial/DeployApp (10.49s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-235708 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [22c0a4a3-8806-4956-b602-dacf49a32392] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [22c0a4a3-8806-4956-b602-dacf49a32392] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.02220807s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-235708 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.49s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.16s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-235708 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-235708 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.047251444s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-235708 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.16s)

TestStartStop/group/no-preload/serial/Stop (11.96s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-235708 --alsologtostderr -v=3
E0917 01:28:56.176483  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/addons-160127/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-235708 --alsologtostderr -v=3: (11.961353486s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.96s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-235708 -n no-preload-235708
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-235708 -n no-preload-235708: exit status 7 (88.297114ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-235708 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/no-preload/serial/SecondStart (57.94s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-235708 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-235708 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (57.567831321s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-235708 -n no-preload-235708
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (57.94s)

TestStartStop/group/embed-certs/serial/DeployApp (11.51s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-734156 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [0acdafe3-fc55-4709-b843-3d9bfe44eefc] Pending
helpers_test.go:352: "busybox" [0acdafe3-fc55-4709-b843-3d9bfe44eefc] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.00393747s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-734156 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.51s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.62s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-734156 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-734156 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.475369981s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-734156 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.62s)

TestStartStop/group/embed-certs/serial/Stop (12.66s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-734156 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-734156 --alsologtostderr -v=3: (12.658992174s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.66s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-734156 -n embed-certs-734156
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-734156 -n embed-certs-734156: exit status 7 (67.590376ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-734156 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/embed-certs/serial/SecondStart (55.8s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-734156 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-734156 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (55.339263739s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-734156 -n embed-certs-734156
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (55.80s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-bq98j" [c70ce3c0-d1e9-40b7-9dfa-d9530c39aba9] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003876963s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.09s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-bq98j" [c70ce3c0-d1e9-40b7-9dfa-d9530c39aba9] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00409672s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-235708 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.09s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-235708 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/no-preload/serial/Pause (3.01s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-235708 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-235708 -n no-preload-235708
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-235708 -n no-preload-235708: exit status 2 (333.46385ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-235708 -n no-preload-235708
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-235708 -n no-preload-235708: exit status 2 (331.741853ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-235708 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-235708 -n no-preload-235708
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-235708 -n no-preload-235708
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.01s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (78.94s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-086933 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-086933 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (1m18.943655145s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (78.94s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-8qbpd" [c833b2a6-a05f-4e16-9e18-34fb0a47331b] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003562495s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-8qbpd" [c833b2a6-a05f-4e16-9e18-34fb0a47331b] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00393444s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-734156 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-734156 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/embed-certs/serial/Pause (3.99s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-734156 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-734156 -n embed-certs-734156
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-734156 -n embed-certs-734156: exit status 2 (351.37333ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-734156 -n embed-certs-734156
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-734156 -n embed-certs-734156: exit status 2 (428.129309ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-734156 --alsologtostderr -v=1
E0917 01:30:48.158052  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/old-k8s-version-384921/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 01:30:48.164354  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/old-k8s-version-384921/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 01:30:48.175715  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/old-k8s-version-384921/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 01:30:48.197523  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/old-k8s-version-384921/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 01:30:48.239297  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/old-k8s-version-384921/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 01:30:48.321113  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/old-k8s-version-384921/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 01:30:48.482644  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/old-k8s-version-384921/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 01:30:48.804468  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/old-k8s-version-384921/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-734156 -n embed-certs-734156
E0917 01:30:49.446838  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/old-k8s-version-384921/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-734156 -n embed-certs-734156
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.99s)

TestStartStop/group/newest-cni/serial/FirstStart (36.96s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-960511 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
E0917 01:30:58.412243  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/old-k8s-version-384921/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 01:31:08.654610  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/old-k8s-version-384921/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 01:31:29.136382  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/old-k8s-version-384921/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-960511 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (36.957905631s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (36.96s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.37s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-960511 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-960511 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.367630084s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.37s)

TestStartStop/group/newest-cni/serial/Stop (1.22s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-960511 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-960511 --alsologtostderr -v=3: (1.220589177s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.22s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-960511 -n newest-cni-960511
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-960511 -n newest-cni-960511: exit status 7 (83.496527ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-960511 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/newest-cni/serial/SecondStart (20.25s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-960511 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-960511 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (19.900832845s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-960511 -n newest-cni-960511
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (20.25s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.76s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-086933 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [988ae90d-4a24-48d2-b517-b9d04b64f1f1] Pending
helpers_test.go:352: "busybox" [988ae90d-4a24-48d2-b517-b9d04b64f1f1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [988ae90d-4a24-48d2-b517-b9d04b64f1f1] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.004619634s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-086933 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.76s)
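The `waiting 8m0s for pods matching … healthy within …` lines come from a label-selector poll loop in the test helpers. A minimal sketch of that pattern, assuming a hypothetical `wait_until` helper; the commented kubectl predicate is illustrative and only reuses names from this log:

```shell
# Poll a predicate command until it succeeds or a deadline passes,
# mirroring the harness's "waiting Xm for pods ... healthy within Ns" loop.
wait_until() {                 # wait_until <timeout_s> <interval_s> <cmd...>
  deadline=$(( $(date +%s) + $1 )); shift
  interval=$1; shift
  until "$@"; do
    [ "$(date +%s)" -ge "$deadline" ] && return 1
    sleep "$interval"
  done
}

# Illustrative predicate for the busybox wait above (not run here):
#   wait_until 480 2 sh -c 'kubectl --context default-k8s-diff-port-086933 \
#     get pod busybox -o jsonpath="{.status.phase}" | grep -q Running'
wait_until 2 1 true && echo ok   # trivial predicate succeeds immediately
```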

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.62s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-086933 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-086933 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.46744131s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-086933 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.62s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-086933 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-086933 --alsologtostderr -v=3: (12.200013784s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.20s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-960511 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/newest-cni/serial/Pause (2.99s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-960511 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-960511 -n newest-cni-960511
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-960511 -n newest-cni-960511: exit status 2 (300.963483ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-960511 -n newest-cni-960511
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-960511 -n newest-cni-960511: exit status 2 (311.817585ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-960511 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-960511 -n newest-cni-960511
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-960511 -n newest-cni-960511
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.99s)
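The recurring `status error: exit status 7 (may be ok)` / `exit status 2 (may be ok)` lines are expected: `minikube status` encodes component state in a bitmask exit code rather than a plain 0/1. A decoding sketch, assuming the bit assignments documented in `minikube status --help` (host = 1, kubelet = 2, apiserver = 4; verify against your minikube version):

```shell
# Decode the bitmask exit code of `minikube status`.
# Assumed bits (from `minikube status --help`): 1=host, 2=kubelet, 4=apiserver.
decode_status() {
  code=$1
  if [ "$code" -eq 0 ]; then echo "all running"; return; fi
  [ $((code & 1)) -ne 0 ] && echo "host stopped"
  [ $((code & 2)) -ne 0 ] && echo "kubelet stopped"
  [ $((code & 4)) -ne 0 ] && echo "apiserver stopped"
  return 0
}

decode_status 2   # paused cluster: kubelet stopped, host still up
decode_status 7   # fully stopped cluster: host, kubelet and apiserver
```

This is why the harness treats exit 2 after `minikube pause` and exit 7 after `minikube stop` as success rather than failure.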

TestNetworkPlugins/group/auto/Start (85.72s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-694260 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-694260 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m25.722760724s)
--- PASS: TestNetworkPlugins/group/auto/Start (85.72s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-086933 -n default-k8s-diff-port-086933
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-086933 -n default-k8s-diff-port-086933: exit status 7 (88.292443ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-086933 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (65.34s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-086933 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
E0917 01:32:10.098599  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/old-k8s-version-384921/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-086933 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (1m4.967369201s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-086933 -n default-k8s-diff-port-086933
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (65.34s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-qd8hc" [b8f8f0a2-8c6a-41bc-b2a8-d5955816bf97] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003713388s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-qd8hc" [b8f8f0a2-8c6a-41bc-b2a8-d5955816bf97] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003039858s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-086933 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.09s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-086933 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)
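`VerifyKubernetesImages` lists the cluster's images and logs anything from outside the expected registries, which is why `kindest/kindnetd` and the busybox test image appear as `Found non-minikube image`. A plain-text sketch of that filter; the image names below are copied from this report, and the registry allowlist is illustrative, not minikube's actual list:

```shell
# Flag images whose registry is not on an (illustrative) allowlist,
# the way the harness reports "Found non-minikube image: ...".
printf '%s\n' \
  registry.k8s.io/kube-apiserver:v1.34.0 \
  gcr.io/k8s-minikube/busybox:1.28.4-glibc \
  kindest/kindnetd:v20250512-df8de77b |
grep -Ev '^(registry\.k8s\.io|gcr\.io/k8s-minikube)/'
# -> kindest/kindnetd:v20250512-df8de77b
```

With a live cluster, the same filter would be fed from `minikube -p <profile> image list` instead of the hard-coded sample.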

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-086933 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-086933 -n default-k8s-diff-port-086933
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-086933 -n default-k8s-diff-port-086933: exit status 2 (322.888474ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-086933 -n default-k8s-diff-port-086933
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-086933 -n default-k8s-diff-port-086933: exit status 2 (326.876329ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-086933 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-086933 -n default-k8s-diff-port-086933
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-086933 -n default-k8s-diff-port-086933
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.13s)
E0917 01:39:02.720999  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/no-preload-235708/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 01:39:06.277485  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/auto-694260/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

TestNetworkPlugins/group/kindnet/Start (85.57s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-694260 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-694260 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m25.570067571s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (85.57s)

TestNetworkPlugins/group/auto/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-694260 "pgrep -a kubelet"
I0917 01:33:25.008270  859053 config.go:182] Loaded profile config "auto-694260": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.40s)

TestNetworkPlugins/group/auto/NetCatPod (11.33s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-694260 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-7wqcq" [846b9883-3e25-4904-a8fc-352b4ef9c01a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-7wqcq" [846b9883-3e25-4904-a8fc-352b4ef9c01a] Running
E0917 01:33:32.020918  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/old-k8s-version-384921/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 01:33:35.013878  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/no-preload-235708/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 01:33:35.020266  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/no-preload-235708/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 01:33:35.031640  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/no-preload-235708/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 01:33:35.053002  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/no-preload-235708/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 01:33:35.094659  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/no-preload-235708/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 01:33:35.176027  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/no-preload-235708/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 01:33:35.337324  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/no-preload-235708/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 01:33:35.658916  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/no-preload-235708/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 01:33:36.300836  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/no-preload-235708/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.005102633s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.33s)

TestNetworkPlugins/group/auto/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-694260 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.22s)

TestNetworkPlugins/group/auto/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-694260 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.21s)

TestNetworkPlugins/group/auto/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-694260 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.18s)

TestNetworkPlugins/group/calico/Start (62.12s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-694260 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E0917 01:34:15.990832  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/no-preload-235708/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-694260 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m2.118741393s)
--- PASS: TestNetworkPlugins/group/calico/Start (62.12s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-vlnmf" [97474756-6fe7-4fcf-8bbf-c9f3a578762b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004540309s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-694260 "pgrep -a kubelet"
I0917 01:34:55.504545  859053 config.go:182] Loaded profile config "kindnet-694260": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.36s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.26s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-694260 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-2lgss" [e3e6a170-7fe4-45b9-ae79-dfa3b490c3d4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0917 01:34:56.957089  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/no-preload-235708/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-2lgss" [e3e6a170-7fe4-45b9-ae79-dfa3b490c3d4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.003132022s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.26s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-xln92" [9c3612a5-95af-4fff-8bdf-18c11fc95dc0] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-xln92" [9c3612a5-95af-4fff-8bdf-18c11fc95dc0] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003836539s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-694260 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.21s)

TestNetworkPlugins/group/kindnet/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-694260 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

TestNetworkPlugins/group/kindnet/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-694260 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-694260 "pgrep -a kubelet"
I0917 01:35:10.393815  859053 config.go:182] Loaded profile config "calico-694260": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

TestNetworkPlugins/group/calico/NetCatPod (12.28s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-694260 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-w2gfg" [2158715b-e10f-4510-b22e-ae0f1f7a5781] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-w2gfg" [2158715b-e10f-4510-b22e-ae0f1f7a5781] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.00368864s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.28s)

TestNetworkPlugins/group/calico/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-694260 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.22s)

TestNetworkPlugins/group/calico/Localhost (0.27s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-694260 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.27s)

TestNetworkPlugins/group/calico/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-694260 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.19s)

TestNetworkPlugins/group/custom-flannel/Start (67.78s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-694260 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-694260 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m7.779631124s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (67.78s)

TestNetworkPlugins/group/enable-default-cni/Start (46.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-694260 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E0917 01:36:15.862922  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/old-k8s-version-384921/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 01:36:18.878849  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/no-preload-235708/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 01:36:34.192057  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/default-k8s-diff-port-086933/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 01:36:34.198431  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/default-k8s-diff-port-086933/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 01:36:34.209905  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/default-k8s-diff-port-086933/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 01:36:34.231295  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/default-k8s-diff-port-086933/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 01:36:34.273513  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/default-k8s-diff-port-086933/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 01:36:34.355683  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/default-k8s-diff-port-086933/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 01:36:34.517834  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/default-k8s-diff-port-086933/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 01:36:34.839493  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/default-k8s-diff-port-086933/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 01:36:35.481094  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/default-k8s-diff-port-086933/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-694260 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (46.211130329s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (46.21s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-694260 "pgrep -a kubelet"
I0917 01:36:35.899946  859053 config.go:182] Loaded profile config "enable-default-cni-694260": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.28s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-694260 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-2bnfz" [4665cd35-33e1-41d0-b001-3da0adf70e95] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0917 01:36:36.762717  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/default-k8s-diff-port-086933/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-2bnfz" [4665cd35-33e1-41d0-b001-3da0adf70e95] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.003988904s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.28s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-694260 "pgrep -a kubelet"
I0917 01:36:37.630325  859053 config.go:182] Loaded profile config "custom-flannel-694260": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.37s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.39s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
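Each NetCatPod subtest applies the same manifest, `testdata/netcat-deployment.yaml`. For orientation, a minimal deployment of the shape these logs imply (deployment `netcat`, label `app=netcat`, container `dnsutils`, port 8080) would look roughly like the sketch below; this is NOT the repository's actual testdata file, and the image line is a placeholder:

```yaml
# Illustrative sketch only -- names taken from the log output above;
# the image is a placeholder, not the one the suite actually uses.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: netcat
spec:
  replicas: 1
  selector:
    matchLabels:
      app: netcat
  template:
    metadata:
      labels:
        app: netcat
    spec:
      containers:
      - name: dnsutils              # container name seen in the pod status lines
        image: <dnsutils-image>     # placeholder
        ports:
        - containerPort: 8080       # port probed by the Localhost/HairPin checks
```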
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-694260 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-l9k7g" [75b9525e-6cd2-43b8-a70b-3f66372728e2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0917 01:36:39.324871  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/default-k8s-diff-port-086933/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-l9k7g" [75b9525e-6cd2-43b8-a70b-3f66372728e2] Running
E0917 01:36:44.446203  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/default-k8s-diff-port-086933/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004071583s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.39s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-694260 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-694260 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-694260 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-694260 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-694260 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-694260 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

TestNetworkPlugins/group/flannel/Start (67.63s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-694260 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E0917 01:37:15.169497  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/default-k8s-diff-port-086933/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-694260 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m7.627014271s)
--- PASS: TestNetworkPlugins/group/flannel/Start (67.63s)

TestNetworkPlugins/group/bridge/Start (88.02s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-694260 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E0917 01:37:56.131575  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/default-k8s-diff-port-086933/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-694260 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m28.022289195s)
--- PASS: TestNetworkPlugins/group/bridge/Start (88.02s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-gqhbj" [f209f9c0-5975-44d5-922a-df9fd5d828ea] Running
E0917 01:38:25.302523  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/auto-694260/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 01:38:25.308995  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/auto-694260/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 01:38:25.320401  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/auto-694260/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 01:38:25.341820  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/auto-694260/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 01:38:25.383337  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/auto-694260/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 01:38:25.464787  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/auto-694260/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 01:38:25.626145  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/auto-694260/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 01:38:25.948229  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/auto-694260/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 01:38:26.589580  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/auto-694260/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004373247s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-694260 "pgrep -a kubelet"
I0917 01:38:26.976960  859053 config.go:182] Loaded profile config "flannel-694260": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.33s)

TestNetworkPlugins/group/flannel/NetCatPod (10.26s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-694260 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-b6wmk" [46a3b3b8-57f8-41b1-ba18-92fac24f2b6b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0917 01:38:27.870914  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/auto-694260/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 01:38:30.432624  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/auto-694260/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-b6wmk" [46a3b3b8-57f8-41b1-ba18-92fac24f2b6b] Running
E0917 01:38:35.013458  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/no-preload-235708/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0917 01:38:35.553989  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/auto-694260/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.002492362s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.26s)

TestNetworkPlugins/group/flannel/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-694260 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

TestNetworkPlugins/group/flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-694260 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

TestNetworkPlugins/group/flannel/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
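The Localhost and HairPin subtests both reduce to a timed TCP connect, `nc -w 5 -i 5 -z <host> 8080`, executed inside the netcat pod. A rough Python equivalent of that probe, runnable outside any cluster (the throwaway local listener merely stands in for the `netcat` service), is sketched below:

```python
import socket

def probe(host: str, port: int, timeout: float = 5.0) -> bool:
    """Mirror `nc -w 5 -z host port`: succeed iff a TCP connect completes."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Throwaway local listener standing in for the netcat pod's port 8080.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]
print(probe("127.0.0.1", port))   # -> True: handshake completes via the backlog
srv.close()
print(probe("127.0.0.1", port))   # -> False: connection refused, nothing listening
```

In the HairPin variant the target host is the pod's own service name (`netcat`), so a success additionally proves the CNI supports hairpin (loopback-through-service) traffic.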
net_test.go:264: (dbg) Run:  kubectl --context flannel-694260 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.43s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-694260 "pgrep -a kubelet"
I0917 01:38:44.113013  859053 config.go:182] Loaded profile config "bridge-694260": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.43s)

TestNetworkPlugins/group/bridge/NetCatPod (11.35s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-694260 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-gtmmt" [40e8d0fc-efe6-4d21-815d-40c4377e6720] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0917 01:38:45.795407  859053 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/auto-694260/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-gtmmt" [40e8d0fc-efe6-4d21-815d-40c4377e6720] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.005206569s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.35s)

TestNetworkPlugins/group/bridge/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-694260 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.23s)

TestNetworkPlugins/group/bridge/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-694260 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.19s)

TestNetworkPlugins/group/bridge/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-694260 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.23s)

Test skip (32/332)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.0/cached-images (0.00s)

TestDownloadOnly/v1.34.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.0/binaries (0.00s)

TestDownloadOnly/v1.34.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.0/kubectl (0.00s)

TestDownloadOnlyKic (0.6s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-206051 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-206051" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-206051
--- SKIP: TestDownloadOnlyKic (0.60s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/Volcano (0.36s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-160127 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.36s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-956488" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-956488
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

TestNetworkPlugins/group/kubenet (3.96s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-694260 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-694260

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-694260

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-694260

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-694260

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-694260

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-694260

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-694260

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-694260

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-694260

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-694260

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694260"

>>> host: /etc/hosts:
* Profile "kubenet-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694260"

>>> host: /etc/resolv.conf:
* Profile "kubenet-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694260"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-694260

>>> host: crictl pods:
* Profile "kubenet-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694260"

>>> host: crictl containers:
* Profile "kubenet-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694260"

>>> k8s: describe netcat deployment:
error: context "kubenet-694260" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-694260" does not exist

>>> k8s: netcat logs:
error: context "kubenet-694260" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-694260" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-694260" does not exist

>>> k8s: coredns logs:
error: context "kubenet-694260" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-694260" does not exist

>>> k8s: api server logs:
error: context "kubenet-694260" does not exist

>>> host: /etc/cni:
* Profile "kubenet-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694260"

>>> host: ip a s:
* Profile "kubenet-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694260"

>>> host: ip r s:
* Profile "kubenet-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694260"

>>> host: iptables-save:
* Profile "kubenet-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694260"

>>> host: iptables table nat:
* Profile "kubenet-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694260"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-694260" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-694260" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-694260" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694260"

>>> host: kubelet daemon config:
* Profile "kubenet-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694260"

>>> k8s: kubelet logs:
* Profile "kubenet-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694260"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694260"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694260"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21550-857204/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 17 Sep 2025 01:23:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-938668
contexts:
- context:
    cluster: kubernetes-upgrade-938668
    extensions:
    - extension:
        last-update: Wed, 17 Sep 2025 01:23:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-938668
  name: kubernetes-upgrade-938668
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-938668
  user:
    client-certificate: /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/kubernetes-upgrade-938668/client.crt
    client-key: /home/jenkins/minikube-integration/21550-857204/.minikube/profiles/kubernetes-upgrade-938668/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-694260

>>> host: docker daemon status:
* Profile "kubenet-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694260"

>>> host: docker daemon config:
* Profile "kubenet-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694260"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694260"

>>> host: docker system info:
* Profile "kubenet-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694260"

>>> host: cri-docker daemon status:
* Profile "kubenet-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694260"

>>> host: cri-docker daemon config:
* Profile "kubenet-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694260"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694260"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694260"

>>> host: cri-dockerd version:
* Profile "kubenet-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694260"

>>> host: containerd daemon status:
* Profile "kubenet-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694260"

>>> host: containerd daemon config:
* Profile "kubenet-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694260"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694260"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694260"

>>> host: containerd config dump:
* Profile "kubenet-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694260"

>>> host: crio daemon status:
* Profile "kubenet-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694260"

>>> host: crio daemon config:
* Profile "kubenet-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694260"

>>> host: /etc/crio:
* Profile "kubenet-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694260"

>>> host: crio config:
* Profile "kubenet-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694260"

----------------------- debugLogs end: kubenet-694260 [took: 3.685314595s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-694260" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-694260
--- SKIP: TestNetworkPlugins/group/kubenet (3.96s)

x
+
TestNetworkPlugins/group/cilium (5.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-694260 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-694260

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-694260

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-694260

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-694260

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-694260

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-694260

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-694260

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-694260

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-694260

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-694260

>>> host: /etc/nsswitch.conf:
* Profile "cilium-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694260"

>>> host: /etc/hosts:
* Profile "cilium-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694260"

>>> host: /etc/resolv.conf:
* Profile "cilium-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694260"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-694260

>>> host: crictl pods:
* Profile "cilium-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694260"

>>> host: crictl containers:
* Profile "cilium-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694260"

>>> k8s: describe netcat deployment:
error: context "cilium-694260" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-694260" does not exist

>>> k8s: netcat logs:
error: context "cilium-694260" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-694260" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-694260" does not exist

>>> k8s: coredns logs:
error: context "cilium-694260" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-694260" does not exist

>>> k8s: api server logs:
error: context "cilium-694260" does not exist

>>> host: /etc/cni:
* Profile "cilium-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694260"

>>> host: ip a s:
* Profile "cilium-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694260"

>>> host: ip r s:
* Profile "cilium-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694260"

>>> host: iptables-save:
* Profile "cilium-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694260"

>>> host: iptables table nat:
* Profile "cilium-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694260"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-694260

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-694260

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-694260" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-694260" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-694260

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-694260

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-694260" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-694260" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-694260" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-694260" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-694260" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694260"

>>> host: kubelet daemon config:
* Profile "cilium-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694260"

>>> k8s: kubelet logs:
* Profile "cilium-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694260"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694260"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694260"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-694260

>>> host: docker daemon status:
* Profile "cilium-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694260"

>>> host: docker daemon config:
* Profile "cilium-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694260"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694260"

>>> host: docker system info:
* Profile "cilium-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694260"

>>> host: cri-docker daemon status:
* Profile "cilium-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694260"

>>> host: cri-docker daemon config:
* Profile "cilium-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694260"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694260"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694260"

>>> host: cri-dockerd version:
* Profile "cilium-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694260"

>>> host: containerd daemon status:
* Profile "cilium-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694260"

>>> host: containerd daemon config:
* Profile "cilium-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694260"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694260"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694260"

>>> host: containerd config dump:
* Profile "cilium-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694260"

>>> host: crio daemon status:
* Profile "cilium-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694260"

>>> host: crio daemon config:
* Profile "cilium-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694260"

>>> host: /etc/crio:
* Profile "cilium-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694260"

>>> host: crio config:
* Profile "cilium-694260" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694260"

----------------------- debugLogs end: cilium-694260 [took: 5.111693571s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-694260" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-694260
--- SKIP: TestNetworkPlugins/group/cilium (5.32s)