Test Report: Docker_Linux_crio_arm64 21341

890003c5847d742050af13aa4e3a32f9efad98ac:2025-09-04:41269

Failed tests (6/332)

Order  Failed test                                    Duration (s)
   37  TestAddons/parallel/Ingress                          155.84
   98  TestFunctional/parallel/ServiceCmdConnect            603.77
  126  TestFunctional/parallel/ServiceCmd/DeployApp         601.33
  135  TestFunctional/parallel/ServiceCmd/HTTPS               0.55
  136  TestFunctional/parallel/ServiceCmd/Format              0.56
  137  TestFunctional/parallel/ServiceCmd/URL                 0.52
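
Each entry above is a standard Go subtest path, so a single failure can be re-run in isolation with go test's -run filter (a sketch, assuming a minikube source checkout with the integration suite under test/integration; the exact driver and runtime flags this CI job passed are not reproduced here):

	# -run treats each slash-separated element as an anchored regex,
	# so a full path from the table selects exactly one subtest.
	go test ./test/integration -v -timeout 60m -run 'TestAddons/parallel/Ingress'
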
TestAddons/parallel/Ingress (155.84s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-250903 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-250903 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-250903 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [143a8430-ffb2-4486-b033-3dd593bcaddf] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [143a8430-ffb2-4486-b033-3dd593bcaddf] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.003857955s
I0903 23:09:03.571064  297789 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-250903 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-250903 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.488380478s)

** stderr **
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-250903 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-250903 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
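
Note: the "status 28" in the stderr block above is curl's exit code for an operation timeout (CURLE_OPERATION_TIMEDOUT), relayed back through minikube ssh — the command ran on the node, but nginx never answered through the ingress. A quicker manual probe of the same path (a sketch; the -m 10 cap and %{http_code} readout are illustrative additions, not part of the test):

	# Prints an HTTP status on success; on timeout curl exits 28 and prints 000.
	out/minikube-linux-arm64 -p addons-250903 ssh \
	  "curl -s -m 10 -o /dev/null -w '%{http_code}\n' -H 'Host: nginx.example.com' http://127.0.0.1/"
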
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-250903
helpers_test.go:243: (dbg) docker inspect addons-250903:

-- stdout --
	[
	    {
	        "Id": "08963ffff7f68776dd205a9988c5b154764c48b8e613645bb5c03470d15885fa",
	        "Created": "2025-09-03T23:05:30.711625775Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 298943,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-03T23:05:30.778822491Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ebcae716971f7c51ed3fd14f6fe4cc79c434c2b1abdabc67816f3649f4bf0002",
	        "ResolvConfPath": "/var/lib/docker/containers/08963ffff7f68776dd205a9988c5b154764c48b8e613645bb5c03470d15885fa/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/08963ffff7f68776dd205a9988c5b154764c48b8e613645bb5c03470d15885fa/hostname",
	        "HostsPath": "/var/lib/docker/containers/08963ffff7f68776dd205a9988c5b154764c48b8e613645bb5c03470d15885fa/hosts",
	        "LogPath": "/var/lib/docker/containers/08963ffff7f68776dd205a9988c5b154764c48b8e613645bb5c03470d15885fa/08963ffff7f68776dd205a9988c5b154764c48b8e613645bb5c03470d15885fa-json.log",
	        "Name": "/addons-250903",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-250903:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-250903",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "08963ffff7f68776dd205a9988c5b154764c48b8e613645bb5c03470d15885fa",
	                "LowerDir": "/var/lib/docker/overlay2/ff21801b691630a1d559e3e601d6d3e7fb5f248fa1013086c6bf7bcd07beae98-init/diff:/var/lib/docker/overlay2/cfed3f2232112709c4ba7d89bdbefe61b3142a45fe30ee6468d5e0113ef24166/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ff21801b691630a1d559e3e601d6d3e7fb5f248fa1013086c6bf7bcd07beae98/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ff21801b691630a1d559e3e601d6d3e7fb5f248fa1013086c6bf7bcd07beae98/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ff21801b691630a1d559e3e601d6d3e7fb5f248fa1013086c6bf7bcd07beae98/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-250903",
	                "Source": "/var/lib/docker/volumes/addons-250903/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-250903",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-250903",
	                "name.minikube.sigs.k8s.io": "addons-250903",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "bbd7366c8cd50e972de9b2b263900b6afee177682973249fe5cb799935c6725d",
	            "SandboxKey": "/var/run/docker/netns/bbd7366c8cd5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-250903": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b6:a5:dd:5e:b2:c8",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5838c3664348199073afd0afe02cd424ae31ce9d16e1c8ce688d578c117c1a9a",
	                    "EndpointID": "316d4790884fe0dfa1f0f07d1bb8446e97c4c74fec63c8abe1ae9ce02ee1dd0d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-250903",
	                        "08963ffff7f6"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
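
Single fields can be pulled out of an inspect dump like this with Go templates instead of scanning the JSON; the two queries below use essentially the same templates the harness itself runs later in these logs (see the cli_runner lines under "Last Start"):

	# Host port mapped to the node's SSH endpoint (33138 in the dump above)
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-250903
	# Node address on the cluster network (192.168.49.2 above)
	docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' addons-250903
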
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-250903 -n addons-250903
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-250903 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-250903 logs -n 25: (1.726949406s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-docker-456061                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-456061 │ jenkins │ v1.36.0 │ 03 Sep 25 23:05 UTC │ 03 Sep 25 23:05 UTC │
	│ start   │ --download-only -p binary-mirror-819907 --alsologtostderr --binary-mirror http://127.0.0.1:44565 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-819907   │ jenkins │ v1.36.0 │ 03 Sep 25 23:05 UTC │                     │
	│ delete  │ -p binary-mirror-819907                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-819907   │ jenkins │ v1.36.0 │ 03 Sep 25 23:05 UTC │ 03 Sep 25 23:05 UTC │
	│ addons  │ enable dashboard -p addons-250903                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-250903          │ jenkins │ v1.36.0 │ 03 Sep 25 23:05 UTC │                     │
	│ addons  │ disable dashboard -p addons-250903                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-250903          │ jenkins │ v1.36.0 │ 03 Sep 25 23:05 UTC │                     │
	│ start   │ -p addons-250903 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-250903          │ jenkins │ v1.36.0 │ 03 Sep 25 23:05 UTC │ 03 Sep 25 23:08 UTC │
	│ addons  │ addons-250903 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-250903          │ jenkins │ v1.36.0 │ 03 Sep 25 23:08 UTC │ 03 Sep 25 23:08 UTC │
	│ addons  │ addons-250903 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-250903          │ jenkins │ v1.36.0 │ 03 Sep 25 23:08 UTC │ 03 Sep 25 23:08 UTC │
	│ addons  │ enable headlamp -p addons-250903 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-250903          │ jenkins │ v1.36.0 │ 03 Sep 25 23:08 UTC │ 03 Sep 25 23:08 UTC │
	│ addons  │ addons-250903 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-250903          │ jenkins │ v1.36.0 │ 03 Sep 25 23:08 UTC │ 03 Sep 25 23:08 UTC │
	│ ip      │ addons-250903 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-250903          │ jenkins │ v1.36.0 │ 03 Sep 25 23:08 UTC │ 03 Sep 25 23:08 UTC │
	│ addons  │ addons-250903 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-250903          │ jenkins │ v1.36.0 │ 03 Sep 25 23:08 UTC │ 03 Sep 25 23:08 UTC │
	│ addons  │ addons-250903 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-250903          │ jenkins │ v1.36.0 │ 03 Sep 25 23:08 UTC │ 03 Sep 25 23:08 UTC │
	│ addons  │ addons-250903 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-250903          │ jenkins │ v1.36.0 │ 03 Sep 25 23:08 UTC │ 03 Sep 25 23:08 UTC │
	│ ssh     │ addons-250903 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-250903          │ jenkins │ v1.36.0 │ 03 Sep 25 23:09 UTC │                     │
	│ addons  │ addons-250903 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-250903          │ jenkins │ v1.36.0 │ 03 Sep 25 23:09 UTC │ 03 Sep 25 23:09 UTC │
	│ addons  │ addons-250903 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-250903          │ jenkins │ v1.36.0 │ 03 Sep 25 23:09 UTC │ 03 Sep 25 23:09 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-250903                                                                                                                                                                                                                                                                                                                                                                                           │ addons-250903          │ jenkins │ v1.36.0 │ 03 Sep 25 23:09 UTC │ 03 Sep 25 23:09 UTC │
	│ addons  │ addons-250903 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-250903          │ jenkins │ v1.36.0 │ 03 Sep 25 23:09 UTC │ 03 Sep 25 23:09 UTC │
	│ addons  │ addons-250903 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-250903          │ jenkins │ v1.36.0 │ 03 Sep 25 23:09 UTC │ 03 Sep 25 23:09 UTC │
	│ addons  │ addons-250903 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-250903          │ jenkins │ v1.36.0 │ 03 Sep 25 23:09 UTC │ 03 Sep 25 23:09 UTC │
	│ ssh     │ addons-250903 ssh cat /opt/local-path-provisioner/pvc-869b6e37-b5c3-43b8-a231-bd6f94d647a1_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-250903          │ jenkins │ v1.36.0 │ 03 Sep 25 23:09 UTC │ 03 Sep 25 23:09 UTC │
	│ addons  │ addons-250903 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-250903          │ jenkins │ v1.36.0 │ 03 Sep 25 23:09 UTC │ 03 Sep 25 23:09 UTC │
	│ addons  │ addons-250903 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-250903          │ jenkins │ v1.36.0 │ 03 Sep 25 23:09 UTC │ 03 Sep 25 23:09 UTC │
	│ ip      │ addons-250903 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-250903          │ jenkins │ v1.36.0 │ 03 Sep 25 23:11 UTC │ 03 Sep 25 23:11 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/03 23:05:05
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0903 23:05:05.982049  298546 out.go:360] Setting OutFile to fd 1 ...
	I0903 23:05:05.982184  298546 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 23:05:05.982198  298546 out.go:374] Setting ErrFile to fd 2...
	I0903 23:05:05.982204  298546 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 23:05:05.982464  298546 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21341-295927/.minikube/bin
	I0903 23:05:05.982916  298546 out.go:368] Setting JSON to false
	I0903 23:05:05.983784  298546 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":6456,"bootTime":1756934250,"procs":145,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0903 23:05:05.983857  298546 start.go:140] virtualization:  
	I0903 23:05:05.987197  298546 out.go:179] * [addons-250903] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	I0903 23:05:05.991126  298546 out.go:179]   - MINIKUBE_LOCATION=21341
	I0903 23:05:05.991266  298546 notify.go:220] Checking for updates...
	I0903 23:05:05.997188  298546 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0903 23:05:06.006068  298546 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21341-295927/kubeconfig
	I0903 23:05:06.009260  298546 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21341-295927/.minikube
	I0903 23:05:06.012332  298546 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0903 23:05:06.015419  298546 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0903 23:05:06.018538  298546 driver.go:421] Setting default libvirt URI to qemu:///system
	I0903 23:05:06.046741  298546 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0903 23:05:06.046877  298546 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0903 23:05:06.101846  298546 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-09-03 23:05:06.092580815 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0903 23:05:06.101955  298546 docker.go:318] overlay module found
	I0903 23:05:06.106895  298546 out.go:179] * Using the docker driver based on user configuration
	I0903 23:05:06.109835  298546 start.go:304] selected driver: docker
	I0903 23:05:06.109886  298546 start.go:918] validating driver "docker" against <nil>
	I0903 23:05:06.109916  298546 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0903 23:05:06.110685  298546 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0903 23:05:06.165069  298546 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-09-03 23:05:06.156278318 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0903 23:05:06.165236  298546 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0903 23:05:06.165458  298546 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0903 23:05:06.168365  298546 out.go:179] * Using Docker driver with root privileges
	I0903 23:05:06.171319  298546 cni.go:84] Creating CNI manager for ""
	I0903 23:05:06.171398  298546 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0903 23:05:06.171411  298546 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0903 23:05:06.171482  298546 start.go:348] cluster config:
	{Name:addons-250903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-250903 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0903 23:05:06.174652  298546 out.go:179] * Starting "addons-250903" primary control-plane node in "addons-250903" cluster
	I0903 23:05:06.177415  298546 cache.go:123] Beginning downloading kic base image for docker with crio
	I0903 23:05:06.180313  298546 out.go:179] * Pulling base image v0.0.47-1756116447-21413 ...
	I0903 23:05:06.183308  298546 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0903 23:05:06.183382  298546 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21341-295927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4
	I0903 23:05:06.183395  298546 cache.go:58] Caching tarball of preloaded images
	I0903 23:05:06.183394  298546 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 in local docker daemon
	I0903 23:05:06.183478  298546 preload.go:172] Found /home/jenkins/minikube-integration/21341-295927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0903 23:05:06.183488  298546 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0903 23:05:06.183911  298546 profile.go:143] Saving config to /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/addons-250903/config.json ...
	I0903 23:05:06.183959  298546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/addons-250903/config.json: {Name:mk48cf24e9209dc062d49c47e0bed3ff20bde07b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:05:06.198571  298546 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 to local cache
	I0903 23:05:06.198712  298546 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 in local cache directory
	I0903 23:05:06.198736  298546 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 in local cache directory, skipping pull
	I0903 23:05:06.198742  298546 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 exists in cache, skipping pull
	I0903 23:05:06.198753  298546 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 as a tarball
	I0903 23:05:06.198761  298546 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 from local cache
	I0903 23:05:24.067676  298546 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 from cached tarball
	I0903 23:05:24.067715  298546 cache.go:232] Successfully downloaded all kic artifacts
	I0903 23:05:24.067747  298546 start.go:360] acquireMachinesLock for addons-250903: {Name:mkc1cf53bc9ff330f73820cc3f9b6671eb5d98a7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0903 23:05:24.067888  298546 start.go:364] duration metric: took 118.968µs to acquireMachinesLock for "addons-250903"
	I0903 23:05:24.067917  298546 start.go:93] Provisioning new machine with config: &{Name:addons-250903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-250903 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0903 23:05:24.068001  298546 start.go:125] createHost starting for "" (driver="docker")
	I0903 23:05:24.071490  298546 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0903 23:05:24.071765  298546 start.go:159] libmachine.API.Create for "addons-250903" (driver="docker")
	I0903 23:05:24.071806  298546 client.go:168] LocalClient.Create starting
	I0903 23:05:24.071934  298546 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21341-295927/.minikube/certs/ca.pem
	I0903 23:05:24.678520  298546 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21341-295927/.minikube/certs/cert.pem
	I0903 23:05:24.813744  298546 cli_runner.go:164] Run: docker network inspect addons-250903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0903 23:05:24.829635  298546 cli_runner.go:211] docker network inspect addons-250903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0903 23:05:24.829730  298546 network_create.go:284] running [docker network inspect addons-250903] to gather additional debugging logs...
	I0903 23:05:24.829753  298546 cli_runner.go:164] Run: docker network inspect addons-250903
	W0903 23:05:24.851200  298546 cli_runner.go:211] docker network inspect addons-250903 returned with exit code 1
	I0903 23:05:24.851239  298546 network_create.go:287] error running [docker network inspect addons-250903]: docker network inspect addons-250903: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-250903 not found
	I0903 23:05:24.851255  298546 network_create.go:289] output of [docker network inspect addons-250903]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-250903 not found
	
	** /stderr **
	I0903 23:05:24.851362  298546 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0903 23:05:24.870369  298546 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019c5440}
	I0903 23:05:24.870411  298546 network_create.go:124] attempt to create docker network addons-250903 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0903 23:05:24.870467  298546 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-250903 addons-250903
	I0903 23:05:24.930133  298546 network_create.go:108] docker network addons-250903 192.168.49.0/24 created
	I0903 23:05:24.930168  298546 kic.go:121] calculated static IP "192.168.49.2" for the "addons-250903" container
	I0903 23:05:24.930262  298546 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0903 23:05:24.945951  298546 cli_runner.go:164] Run: docker volume create addons-250903 --label name.minikube.sigs.k8s.io=addons-250903 --label created_by.minikube.sigs.k8s.io=true
	I0903 23:05:24.963977  298546 oci.go:103] Successfully created a docker volume addons-250903
	I0903 23:05:24.964094  298546 cli_runner.go:164] Run: docker run --rm --name addons-250903-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-250903 --entrypoint /usr/bin/test -v addons-250903:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 -d /var/lib
	I0903 23:05:26.454725  298546 cli_runner.go:217] Completed: docker run --rm --name addons-250903-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-250903 --entrypoint /usr/bin/test -v addons-250903:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 -d /var/lib: (1.490573027s)
	I0903 23:05:26.454759  298546 oci.go:107] Successfully prepared a docker volume addons-250903
	I0903 23:05:26.454781  298546 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0903 23:05:26.454799  298546 kic.go:194] Starting extracting preloaded images to volume ...
	I0903 23:05:26.454864  298546 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21341-295927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-250903:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 -I lz4 -xf /preloaded.tar -C /extractDir
	I0903 23:05:30.631479  298546 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21341-295927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-250903:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 -I lz4 -xf /preloaded.tar -C /extractDir: (4.176572441s)
	I0903 23:05:30.631514  298546 kic.go:203] duration metric: took 4.176710676s to extract preloaded images to volume ...
	W0903 23:05:30.631651  298546 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0903 23:05:30.631794  298546 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0903 23:05:30.696300  298546 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-250903 --name addons-250903 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-250903 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-250903 --network addons-250903 --ip 192.168.49.2 --volume addons-250903:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9
	I0903 23:05:31.027228  298546 cli_runner.go:164] Run: docker container inspect addons-250903 --format={{.State.Running}}
	I0903 23:05:31.045962  298546 cli_runner.go:164] Run: docker container inspect addons-250903 --format={{.State.Status}}
	I0903 23:05:31.070402  298546 cli_runner.go:164] Run: docker exec addons-250903 stat /var/lib/dpkg/alternatives/iptables
	I0903 23:05:31.129263  298546 oci.go:144] the created container "addons-250903" has a running status.
	I0903 23:05:31.129299  298546 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21341-295927/.minikube/machines/addons-250903/id_rsa...
	I0903 23:05:31.662956  298546 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21341-295927/.minikube/machines/addons-250903/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0903 23:05:31.688760  298546 cli_runner.go:164] Run: docker container inspect addons-250903 --format={{.State.Status}}
	I0903 23:05:31.720555  298546 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0903 23:05:31.720576  298546 kic_runner.go:114] Args: [docker exec --privileged addons-250903 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0903 23:05:31.768452  298546 cli_runner.go:164] Run: docker container inspect addons-250903 --format={{.State.Status}}
	I0903 23:05:31.788782  298546 machine.go:93] provisionDockerMachine start ...
	I0903 23:05:31.788881  298546 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-250903
	I0903 23:05:31.810209  298546 main.go:141] libmachine: Using SSH client type: native
	I0903 23:05:31.810546  298546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e9ef0] 0x3ec6b0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0903 23:05:31.810563  298546 main.go:141] libmachine: About to run SSH command:
	hostname
	I0903 23:05:31.943567  298546 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-250903
	
	I0903 23:05:31.943593  298546 ubuntu.go:182] provisioning hostname "addons-250903"
	I0903 23:05:31.943711  298546 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-250903
	I0903 23:05:31.964268  298546 main.go:141] libmachine: Using SSH client type: native
	I0903 23:05:31.964574  298546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e9ef0] 0x3ec6b0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0903 23:05:31.964591  298546 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-250903 && echo "addons-250903" | sudo tee /etc/hostname
	I0903 23:05:32.108864  298546 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-250903
	
	I0903 23:05:32.109043  298546 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-250903
	I0903 23:05:32.129650  298546 main.go:141] libmachine: Using SSH client type: native
	I0903 23:05:32.129979  298546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e9ef0] 0x3ec6b0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0903 23:05:32.130001  298546 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-250903' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-250903/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-250903' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0903 23:05:32.255827  298546 main.go:141] libmachine: SSH cmd err, output: <nil>: 
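
provisionDockerMachine resolves the container's published 22/tcp port with docker container inspect (33138 in this run) and then drives every provisioning command over SSH as the docker user, as in the hostname steps above. A hedged sketch of that dial-and-run pattern with golang.org/x/crypto/ssh (port and user from the log; skipping the host key check is only reasonable for a local test container):

package main

import (
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Read the private key generated for the machine (id_rsa above).
	keyBytes, err := os.ReadFile("id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local test container only
	}
	// 33138 is the host port docker mapped to the container's 22/tcp.
	client, err := ssh.Dial("tcp", "127.0.0.1:33138", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()
	out, err := session.CombinedOutput("hostname")
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("remote hostname: %s", out)
}
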
	I0903 23:05:32.255919  298546 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21341-295927/.minikube CaCertPath:/home/jenkins/minikube-integration/21341-295927/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21341-295927/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21341-295927/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21341-295927/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21341-295927/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21341-295927/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21341-295927/.minikube}
	I0903 23:05:32.255976  298546 ubuntu.go:190] setting up certificates
	I0903 23:05:32.256016  298546 provision.go:84] configureAuth start
	I0903 23:05:32.256136  298546 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-250903
	I0903 23:05:32.273948  298546 provision.go:143] copyHostCerts
	I0903 23:05:32.274050  298546 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21341-295927/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21341-295927/.minikube/ca.pem (1082 bytes)
	I0903 23:05:32.274183  298546 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21341-295927/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21341-295927/.minikube/cert.pem (1123 bytes)
	I0903 23:05:32.274236  298546 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21341-295927/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21341-295927/.minikube/key.pem (1675 bytes)
	I0903 23:05:32.274282  298546 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21341-295927/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21341-295927/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21341-295927/.minikube/certs/ca-key.pem org=jenkins.addons-250903 san=[127.0.0.1 192.168.49.2 addons-250903 localhost minikube]
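
configureAuth then issues a server certificate whose SAN list covers every name the machine may be reached by: 127.0.0.1, 192.168.49.2, addons-250903, localhost and minikube. A sketch of issuing such a certificate with crypto/x509; a freshly generated self-signed CA stands in for ca.pem/ca-key.pem here, and all names are taken from the log line above:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// A fresh self-signed CA stands in for ca.pem / ca-key.pem.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(3, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		log.Fatal(err)
	}
	// Server cert with the SAN list from the log: IPs plus host names.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-250903"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
		DNSNames:     []string{"addons-250903", "localhost", "minikube"},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}) // server.pem body
}
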
	I0903 23:05:33.627929  298546 provision.go:177] copyRemoteCerts
	I0903 23:05:33.627998  298546 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0903 23:05:33.628038  298546 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-250903
	I0903 23:05:33.645000  298546 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21341-295927/.minikube/machines/addons-250903/id_rsa Username:docker}
	I0903 23:05:33.736614  298546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-295927/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0903 23:05:33.761519  298546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-295927/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0903 23:05:33.785831  298546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-295927/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0903 23:05:33.810448  298546 provision.go:87] duration metric: took 1.554391833s to configureAuth
	I0903 23:05:33.810479  298546 ubuntu.go:206] setting minikube options for container-runtime
	I0903 23:05:33.810671  298546 config.go:182] Loaded profile config "addons-250903": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0903 23:05:33.810791  298546 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-250903
	I0903 23:05:33.834531  298546 main.go:141] libmachine: Using SSH client type: native
	I0903 23:05:33.834840  298546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e9ef0] 0x3ec6b0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0903 23:05:33.834864  298546 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0903 23:05:34.067209  298546 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0903 23:05:34.067233  298546 machine.go:96] duration metric: took 2.27842539s to provisionDockerMachine
	I0903 23:05:34.067243  298546 client.go:171] duration metric: took 9.995430587s to LocalClient.Create
	I0903 23:05:34.067269  298546 start.go:167] duration metric: took 9.995505927s to libmachine.API.Create "addons-250903"
	I0903 23:05:34.067282  298546 start.go:293] postStartSetup for "addons-250903" (driver="docker")
	I0903 23:05:34.067293  298546 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0903 23:05:34.067356  298546 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0903 23:05:34.067405  298546 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-250903
	I0903 23:05:34.084907  298546 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21341-295927/.minikube/machines/addons-250903/id_rsa Username:docker}
	I0903 23:05:34.177250  298546 ssh_runner.go:195] Run: cat /etc/os-release
	I0903 23:05:34.180421  298546 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0903 23:05:34.180459  298546 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0903 23:05:34.180471  298546 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0903 23:05:34.180478  298546 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0903 23:05:34.180489  298546 filesync.go:126] Scanning /home/jenkins/minikube-integration/21341-295927/.minikube/addons for local assets ...
	I0903 23:05:34.180561  298546 filesync.go:126] Scanning /home/jenkins/minikube-integration/21341-295927/.minikube/files for local assets ...
	I0903 23:05:34.180589  298546 start.go:296] duration metric: took 113.300465ms for postStartSetup
	I0903 23:05:34.180911  298546 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-250903
	I0903 23:05:34.198498  298546 profile.go:143] Saving config to /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/addons-250903/config.json ...
	I0903 23:05:34.198820  298546 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0903 23:05:34.198919  298546 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-250903
	I0903 23:05:34.216120  298546 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21341-295927/.minikube/machines/addons-250903/id_rsa Username:docker}
	I0903 23:05:34.304643  298546 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0903 23:05:34.309270  298546 start.go:128] duration metric: took 10.241251167s to createHost
	I0903 23:05:34.309298  298546 start.go:83] releasing machines lock for "addons-250903", held for 10.241399863s
	I0903 23:05:34.309369  298546 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-250903
	I0903 23:05:34.327504  298546 ssh_runner.go:195] Run: cat /version.json
	I0903 23:05:34.327564  298546 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-250903
	I0903 23:05:34.327833  298546 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0903 23:05:34.327901  298546 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-250903
	I0903 23:05:34.347285  298546 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21341-295927/.minikube/machines/addons-250903/id_rsa Username:docker}
	I0903 23:05:34.347576  298546 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21341-295927/.minikube/machines/addons-250903/id_rsa Username:docker}
	I0903 23:05:34.563964  298546 ssh_runner.go:195] Run: systemctl --version
	I0903 23:05:34.568322  298546 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0903 23:05:34.708801  298546 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0903 23:05:34.713206  298546 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0903 23:05:34.740275  298546 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0903 23:05:34.740360  298546 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0903 23:05:34.777340  298546 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
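
The two find ... -exec mv steps above neutralize the kicbase image's default CNI configs (loopback, podman and crio bridge) by renaming them with a .mk_disabled suffix, so only the CNI minikube installs later is active. A rough Go equivalent using filepath.Glob (patterns abbreviated from the find expressions; illustrative, not minikube's code):

package main

import (
	"log"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	patterns := []string{
		"/etc/cni/net.d/*loopback.conf*",
		"/etc/cni/net.d/*bridge*",
		"/etc/cni/net.d/*podman*",
	}
	for _, p := range patterns {
		matches, err := filepath.Glob(p)
		if err != nil {
			log.Fatal(err)
		}
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue // already parked on a previous run
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				log.Fatal(err)
			}
		}
	}
}
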
	I0903 23:05:34.777364  298546 start.go:495] detecting cgroup driver to use...
	I0903 23:05:34.777396  298546 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0903 23:05:34.777459  298546 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0903 23:05:34.793316  298546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0903 23:05:34.805154  298546 docker.go:218] disabling cri-docker service (if available) ...
	I0903 23:05:34.805230  298546 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0903 23:05:34.820056  298546 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0903 23:05:34.835456  298546 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0903 23:05:34.917868  298546 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0903 23:05:35.022892  298546 docker.go:234] disabling docker service ...
	I0903 23:05:35.023040  298546 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0903 23:05:35.044174  298546 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0903 23:05:35.056844  298546 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0903 23:05:35.144047  298546 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0903 23:05:35.242771  298546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0903 23:05:35.254703  298546 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0903 23:05:35.271599  298546 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0903 23:05:35.271794  298546 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:05:35.282570  298546 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0903 23:05:35.282685  298546 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:05:35.293644  298546 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:05:35.304426  298546 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:05:35.314308  298546 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0903 23:05:35.323933  298546 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:05:35.333799  298546 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:05:35.350559  298546 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
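
The run from ...35.271794 through ...35.350559 is a series of sed -i edits rewriting /etc/crio/crio.conf.d/02-crio.conf: pin the pause image, switch cgroup_manager to cgroupfs, move conmon into the pod cgroup, and open unprivileged low ports via default_sysctls. A minimal Go equivalent of one such edit, the cgroup_manager rewrite, with regexp standing in for sed (a sketch only; it assumes write access to the file):

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf" // path from the log
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	re := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	out := re.ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		log.Fatal(err)
	}
}
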
	I0903 23:05:35.361174  298546 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0903 23:05:35.369862  298546 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0903 23:05:35.378579  298546 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 23:05:35.462285  298546 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0903 23:05:35.570944  298546 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0903 23:05:35.571055  298546 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0903 23:05:35.574729  298546 start.go:563] Will wait 60s for crictl version
	I0903 23:05:35.574814  298546 ssh_runner.go:195] Run: which crictl
	I0903 23:05:35.578332  298546 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0903 23:05:35.618696  298546 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
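
After restarting CRI-O, minikube waits up to 60s for /var/run/crio/crio.sock to appear before probing crictl version, as the two "Will wait 60s" lines above show. A small sketch of that wait, polling os.Stat against a deadline (the 500ms interval is an assumption, not taken from the log):

package main

import (
	"fmt"
	"log"
	"os"
	"time"
)

// waitForSocket polls until path exists or the deadline passes.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		log.Fatal(err)
	}
	log.Println("crio socket is ready")
}
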
	I0903 23:05:35.618864  298546 ssh_runner.go:195] Run: crio --version
	I0903 23:05:35.657860  298546 ssh_runner.go:195] Run: crio --version
	I0903 23:05:35.701566  298546 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0903 23:05:35.704522  298546 cli_runner.go:164] Run: docker network inspect addons-250903 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0903 23:05:35.720334  298546 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0903 23:05:35.723853  298546 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0903 23:05:35.734732  298546 kubeadm.go:875] updating cluster {Name:addons-250903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-250903 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0903 23:05:35.734853  298546 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0903 23:05:35.734911  298546 ssh_runner.go:195] Run: sudo crictl images --output json
	I0903 23:05:35.811643  298546 crio.go:514] all images are preloaded for cri-o runtime.
	I0903 23:05:35.811713  298546 crio.go:433] Images already preloaded, skipping extraction
	I0903 23:05:35.811779  298546 ssh_runner.go:195] Run: sudo crictl images --output json
	I0903 23:05:35.852186  298546 crio.go:514] all images are preloaded for cri-o runtime.
	I0903 23:05:35.852209  298546 cache_images.go:85] Images are preloaded, skipping loading
	I0903 23:05:35.852218  298546 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 crio true true} ...
	I0903 23:05:35.852358  298546 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-250903 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:addons-250903 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0903 23:05:35.852447  298546 ssh_runner.go:195] Run: crio config
	I0903 23:05:35.904516  298546 cni.go:84] Creating CNI manager for ""
	I0903 23:05:35.904541  298546 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0903 23:05:35.904551  298546 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0903 23:05:35.904595  298546 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-250903 NodeName:addons-250903 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0903 23:05:35.904751  298546 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-250903"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0903 23:05:35.904841  298546 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0903 23:05:35.913502  298546 binaries.go:44] Found k8s binaries, skipping transfer
	I0903 23:05:35.913572  298546 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0903 23:05:35.922313  298546 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0903 23:05:35.940782  298546 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0903 23:05:35.959207  298546 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I0903 23:05:35.977318  298546 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0903 23:05:35.980784  298546 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0903 23:05:35.991581  298546 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 23:05:36.074675  298546 ssh_runner.go:195] Run: sudo systemctl start kubelet
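
The steps from ...35.913572 through ...36.074675 install the kubelet systemd drop-in and unit shown earlier, then reload systemd and start the service. A hedged sketch of that sequence (the drop-in body is abbreviated from the ExecStart printed above; error handling is minimal and root privileges are assumed):

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Drop-in body abbreviated from the kubelet unit earlier in the log.
	dropIn := `[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --config=/var/lib/kubelet/config.yaml --hostname-override=addons-250903 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
`
	if err := os.MkdirAll("/etc/systemd/system/kubelet.service.d", 0o755); err != nil {
		log.Fatal(err)
	}
	path := "/etc/systemd/system/kubelet.service.d/10-kubeadm.conf"
	if err := os.WriteFile(path, []byte(dropIn), 0o644); err != nil {
		log.Fatal(err)
	}
	for _, args := range [][]string{{"daemon-reload"}, {"start", "kubelet"}} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			log.Fatalf("systemctl %v: %v\n%s", args, err, out)
		}
	}
}
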
	I0903 23:05:36.088893  298546 certs.go:68] Setting up /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/addons-250903 for IP: 192.168.49.2
	I0903 23:05:36.088970  298546 certs.go:194] generating shared ca certs ...
	I0903 23:05:36.089001  298546 certs.go:226] acquiring lock for ca certs: {Name:mk7e6b174a793881e5001fc4d8e7ec5b846a7bc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:05:36.089190  298546 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21341-295927/.minikube/ca.key
	I0903 23:05:36.726066  298546 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21341-295927/.minikube/ca.crt ...
	I0903 23:05:36.726100  298546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21341-295927/.minikube/ca.crt: {Name:mkf57f96ac79222f481b1b952352b2f070ac5606 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:05:36.726303  298546 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21341-295927/.minikube/ca.key ...
	I0903 23:05:36.726322  298546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21341-295927/.minikube/ca.key: {Name:mkcb6a322f96a2c80b8aa1817fb6d82d25735698 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:05:36.726414  298546 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21341-295927/.minikube/proxy-client-ca.key
	I0903 23:05:36.879967  298546 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21341-295927/.minikube/proxy-client-ca.crt ...
	I0903 23:05:36.880001  298546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21341-295927/.minikube/proxy-client-ca.crt: {Name:mkda65351f66c6a834271716f4791de8b0537fdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:05:36.880906  298546 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21341-295927/.minikube/proxy-client-ca.key ...
	I0903 23:05:36.880927  298546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21341-295927/.minikube/proxy-client-ca.key: {Name:mk16ccf9386f3f52f40f9182a1902b7f94019cb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:05:36.881674  298546 certs.go:256] generating profile certs ...
	I0903 23:05:36.881767  298546 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/addons-250903/client.key
	I0903 23:05:36.881811  298546 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/addons-250903/client.crt with IP's: []
	I0903 23:05:37.092373  298546 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/addons-250903/client.crt ...
	I0903 23:05:37.092406  298546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/addons-250903/client.crt: {Name:mk8fac307113ad2bddc9d332e5910aae776bca11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:05:37.093271  298546 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/addons-250903/client.key ...
	I0903 23:05:37.093291  298546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/addons-250903/client.key: {Name:mk1580a4e3b08f29bb0ff42746a13f8ade6807b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:05:37.093393  298546 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/addons-250903/apiserver.key.87a59954
	I0903 23:05:37.093419  298546 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/addons-250903/apiserver.crt.87a59954 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0903 23:05:37.613876  298546 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/addons-250903/apiserver.crt.87a59954 ...
	I0903 23:05:37.613913  298546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/addons-250903/apiserver.crt.87a59954: {Name:mk7829bf7941627160ed4b8a7c1bb210e58542c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:05:37.614890  298546 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/addons-250903/apiserver.key.87a59954 ...
	I0903 23:05:37.614914  298546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/addons-250903/apiserver.key.87a59954: {Name:mk6ab49889b849b34a0449169305b526f7abb942 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:05:37.615010  298546 certs.go:381] copying /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/addons-250903/apiserver.crt.87a59954 -> /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/addons-250903/apiserver.crt
	I0903 23:05:37.615109  298546 certs.go:385] copying /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/addons-250903/apiserver.key.87a59954 -> /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/addons-250903/apiserver.key
	I0903 23:05:37.615168  298546 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/addons-250903/proxy-client.key
	I0903 23:05:37.615196  298546 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/addons-250903/proxy-client.crt with IP's: []
	I0903 23:05:37.811055  298546 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/addons-250903/proxy-client.crt ...
	I0903 23:05:37.811088  298546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/addons-250903/proxy-client.crt: {Name:mk92f9f88ceb7b214c602e6a1d825f2e8060c30e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:05:37.812010  298546 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/addons-250903/proxy-client.key ...
	I0903 23:05:37.812032  298546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/addons-250903/proxy-client.key: {Name:mk6726bdb88e468a6dd3c2064a50c275d3bf3f63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:05:37.812949  298546 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-295927/.minikube/certs/ca-key.pem (1675 bytes)
	I0903 23:05:37.812994  298546 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-295927/.minikube/certs/ca.pem (1082 bytes)
	I0903 23:05:37.813028  298546 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-295927/.minikube/certs/cert.pem (1123 bytes)
	I0903 23:05:37.813060  298546 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-295927/.minikube/certs/key.pem (1675 bytes)
	I0903 23:05:37.813663  298546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-295927/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0903 23:05:37.839886  298546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-295927/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0903 23:05:37.865351  298546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-295927/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0903 23:05:37.890922  298546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-295927/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0903 23:05:37.915158  298546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/addons-250903/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0903 23:05:37.939629  298546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/addons-250903/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0903 23:05:37.963687  298546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/addons-250903/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0903 23:05:37.987988  298546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/addons-250903/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0903 23:05:38.014381  298546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-295927/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0903 23:05:38.041656  298546 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0903 23:05:38.061588  298546 ssh_runner.go:195] Run: openssl version
	I0903 23:05:38.067579  298546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0903 23:05:38.077774  298546 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0903 23:05:38.081821  298546 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  3 23:05 /usr/share/ca-certificates/minikubeCA.pem
	I0903 23:05:38.081943  298546 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0903 23:05:38.089330  298546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
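
The openssl x509 -hash step above computes the subject-hash name (b5213941 in this run) under which OpenSSL-based clients look up a CA in /etc/ssl/certs, and the following ln -fs publishes minikubeCA.pem under that name. A sketch of those two steps from Go, shelling out to openssl for the hash (root privileges assumed):

package main

import (
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	const cert = "/usr/share/ca-certificates/minikubeCA.pem"
	// Same command as the log: print the OpenSSL subject hash of the cert.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out)) // b5213941 in this run
	link := "/etc/ssl/certs/" + hash + ".0"
	_ = os.Remove(link) // mimic ln -fs: replace any stale link
	if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
		log.Fatal(err)
	}
	log.Printf("linked %s -> minikubeCA.pem", link)
}
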
	I0903 23:05:38.099418  298546 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0903 23:05:38.102996  298546 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0903 23:05:38.103047  298546 kubeadm.go:392] StartCluster: {Name:addons-250903 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-250903 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0903 23:05:38.103122  298546 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0903 23:05:38.103188  298546 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0903 23:05:38.140603  298546 cri.go:89] found id: ""
	I0903 23:05:38.140721  298546 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0903 23:05:38.150132  298546 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0903 23:05:38.159334  298546 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0903 23:05:38.159407  298546 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0903 23:05:38.168454  298546 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0903 23:05:38.168472  298546 kubeadm.go:157] found existing configuration files:
	
	I0903 23:05:38.168528  298546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0903 23:05:38.177601  298546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0903 23:05:38.177731  298546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0903 23:05:38.186566  298546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0903 23:05:38.195622  298546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0903 23:05:38.195733  298546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0903 23:05:38.204463  298546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0903 23:05:38.214126  298546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0903 23:05:38.214204  298546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0903 23:05:38.222858  298546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0903 23:05:38.231870  298546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0903 23:05:38.231974  298546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0903 23:05:38.240910  298546 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0903 23:05:38.281021  298546 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0903 23:05:38.281192  298546 kubeadm.go:310] [preflight] Running pre-flight checks
	I0903 23:05:38.298186  298546 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0903 23:05:38.298315  298546 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1084-aws
	I0903 23:05:38.298379  298546 kubeadm.go:310] OS: Linux
	I0903 23:05:38.298453  298546 kubeadm.go:310] CGROUPS_CPU: enabled
	I0903 23:05:38.298522  298546 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0903 23:05:38.298615  298546 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0903 23:05:38.298687  298546 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0903 23:05:38.298753  298546 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0903 23:05:38.298830  298546 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0903 23:05:38.298913  298546 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0903 23:05:38.298989  298546 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0903 23:05:38.299073  298546 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0903 23:05:38.357934  298546 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0903 23:05:38.358105  298546 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0903 23:05:38.358208  298546 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0903 23:05:38.368218  298546 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0903 23:05:38.374651  298546 out.go:252]   - Generating certificates and keys ...
	I0903 23:05:38.374832  298546 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0903 23:05:38.374936  298546 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0903 23:05:38.439630  298546 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0903 23:05:38.760655  298546 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0903 23:05:38.879791  298546 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0903 23:05:39.230368  298546 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0903 23:05:39.736127  298546 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0903 23:05:39.736498  298546 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-250903 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0903 23:05:40.366737  298546 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0903 23:05:40.367102  298546 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-250903 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0903 23:05:40.652145  298546 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0903 23:05:40.833474  298546 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0903 23:05:41.198893  298546 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0903 23:05:41.199184  298546 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0903 23:05:41.588778  298546 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0903 23:05:41.961141  298546 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0903 23:05:42.643115  298546 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0903 23:05:43.297648  298546 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0903 23:05:43.698343  298546 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0903 23:05:43.698998  298546 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0903 23:05:43.701702  298546 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0903 23:05:43.705046  298546 out.go:252]   - Booting up control plane ...
	I0903 23:05:43.705157  298546 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0903 23:05:43.705234  298546 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0903 23:05:43.705298  298546 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0903 23:05:43.720943  298546 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0903 23:05:43.721054  298546 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0903 23:05:43.727611  298546 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0903 23:05:43.728071  298546 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0903 23:05:43.728388  298546 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0903 23:05:43.824976  298546 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0903 23:05:43.825110  298546 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0903 23:05:44.825417  298546 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.000779334s
	I0903 23:05:44.829077  298546 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0903 23:05:44.829182  298546 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0903 23:05:44.829282  298546 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0903 23:05:44.829367  298546 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0903 23:05:47.613644  298546 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.783877486s
	I0903 23:05:49.004399  298546 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 4.175301224s
	I0903 23:05:50.830665  298546 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 6.001315096s
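
The kubelet-check and control-plane-check phases above poll health endpoints (http://127.0.0.1:10248/healthz for the kubelet; /livez and /healthz URLs for the control-plane components) until they return 200 or the 4m budget expires. A generic sketch of such a poll (the 1s interval is an assumption; TLS verification is skipped because the control-plane endpoints use self-signed certificates during bootstrap):

package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"net/http"
	"time"
)

// waitHealthy polls url until it returns 200 OK or timeout elapses.
func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// Bootstrap endpoints serve self-signed certs.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if resp, err := client.Get(url); err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("%s not healthy within %s", url, timeout)
}

func main() {
	if err := waitHealthy("http://127.0.0.1:10248/healthz", 4*time.Minute); err != nil {
		log.Fatal(err)
	}
	log.Println("kubelet healthy")
}
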
	I0903 23:05:50.850893  298546 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0903 23:05:50.869871  298546 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0903 23:05:50.895346  298546 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0903 23:05:50.895557  298546 kubeadm.go:310] [mark-control-plane] Marking the node addons-250903 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0903 23:05:50.910950  298546 kubeadm.go:310] [bootstrap-token] Using token: 1t56xr.pelr8rpxxao9jgzg
	I0903 23:05:50.914005  298546 out.go:252]   - Configuring RBAC rules ...
	I0903 23:05:50.914126  298546 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0903 23:05:50.921756  298546 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0903 23:05:50.931403  298546 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0903 23:05:50.938292  298546 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0903 23:05:50.944896  298546 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0903 23:05:50.949952  298546 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0903 23:05:51.238805  298546 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0903 23:05:51.664871  298546 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0903 23:05:52.237616  298546 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0903 23:05:52.238808  298546 kubeadm.go:310] 
	I0903 23:05:52.238900  298546 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0903 23:05:52.238912  298546 kubeadm.go:310] 
	I0903 23:05:52.238991  298546 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0903 23:05:52.239000  298546 kubeadm.go:310] 
	I0903 23:05:52.239026  298546 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0903 23:05:52.239089  298546 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0903 23:05:52.239144  298546 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0903 23:05:52.239152  298546 kubeadm.go:310] 
	I0903 23:05:52.239207  298546 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0903 23:05:52.239215  298546 kubeadm.go:310] 
	I0903 23:05:52.239264  298546 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0903 23:05:52.239272  298546 kubeadm.go:310] 
	I0903 23:05:52.239325  298546 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0903 23:05:52.239405  298546 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0903 23:05:52.239477  298546 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0903 23:05:52.239486  298546 kubeadm.go:310] 
	I0903 23:05:52.239571  298546 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0903 23:05:52.239671  298546 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0903 23:05:52.239680  298546 kubeadm.go:310] 
	I0903 23:05:52.239765  298546 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 1t56xr.pelr8rpxxao9jgzg \
	I0903 23:05:52.239873  298546 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:14e5bcf8dd40ca5fbcdb20d30cd0179b6bd58082b34f5336cdf4ea8048277216 \
	I0903 23:05:52.239899  298546 kubeadm.go:310] 	--control-plane 
	I0903 23:05:52.239907  298546 kubeadm.go:310] 
	I0903 23:05:52.239993  298546 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0903 23:05:52.240001  298546 kubeadm.go:310] 
	I0903 23:05:52.240097  298546 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 1t56xr.pelr8rpxxao9jgzg \
	I0903 23:05:52.240211  298546 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:14e5bcf8dd40ca5fbcdb20d30cd0179b6bd58082b34f5336cdf4ea8048277216 
	I0903 23:05:52.244003  298546 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0903 23:05:52.244234  298546 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0903 23:05:52.244338  298546 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0903 23:05:52.244359  298546 cni.go:84] Creating CNI manager for ""
	I0903 23:05:52.244366  298546 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0903 23:05:52.247516  298546 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0903 23:05:52.250586  298546 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0903 23:05:52.254461  298546 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0903 23:05:52.254481  298546 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0903 23:05:52.273624  298546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0903 23:05:52.565530  298546 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0903 23:05:52.565628  298546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0903 23:05:52.565677  298546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-250903 minikube.k8s.io/updated_at=2025_09_03T23_05_52_0700 minikube.k8s.io/version=v1.36.0 minikube.k8s.io/commit=b3583632deefb20d71cab8d8ac0a8c3504aed1fb minikube.k8s.io/name=addons-250903 minikube.k8s.io/primary=true
	I0903 23:05:52.739829  298546 ops.go:34] apiserver oom_adj: -16
	I0903 23:05:52.739935  298546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0903 23:05:53.240410  298546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0903 23:05:53.740032  298546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0903 23:05:54.240671  298546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0903 23:05:54.740145  298546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0903 23:05:55.240812  298546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0903 23:05:55.740004  298546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0903 23:05:56.240555  298546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0903 23:05:56.740577  298546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0903 23:05:56.859672  298546 kubeadm.go:1105] duration metric: took 4.294101824s to wait for elevateKubeSystemPrivileges
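The 4.29s elevateKubeSystemPrivileges metric spans the polling loop from 23:05:52 to 23:05:56: the clusterrolebinding issued at 23:05:52.565628 grants cluster-admin to kube-system's default service account, and the repeated `kubectl get sa default` calls wait for that account to exist. A shell equivalent of the wait, under the assumption that waiting is all the loop does (an illustration, not minikube's actual code):

    # poll until kubeadm has created the default ServiceAccount
    until sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done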
	I0903 23:05:56.859699  298546 kubeadm.go:394] duration metric: took 18.756656102s to StartCluster
	I0903 23:05:56.859717  298546 settings.go:142] acquiring lock: {Name:mk608318d98bf81e9dffbd03acd4a7d6ae6e8ff5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:05:56.860452  298546 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21341-295927/kubeconfig
	I0903 23:05:56.861028  298546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21341-295927/kubeconfig: {Name:mk9657f07c514a05491c8f9fb0d3d2dcd2edd8bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:05:56.866826  298546 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0903 23:05:56.868361  298546 config.go:182] Loaded profile config "addons-250903": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0903 23:05:56.868412  298546 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0903 23:05:56.868522  298546 addons.go:69] Setting yakd=true in profile "addons-250903"
	I0903 23:05:56.868544  298546 addons.go:238] Setting addon yakd=true in "addons-250903"
	I0903 23:05:56.868569  298546 host.go:66] Checking if "addons-250903" exists ...
	I0903 23:05:56.869099  298546 cli_runner.go:164] Run: docker container inspect addons-250903 --format={{.State.Status}}
	I0903 23:05:56.870593  298546 out.go:179] * Verifying Kubernetes components...
	I0903 23:05:56.871163  298546 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0903 23:05:56.871444  298546 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-250903"
	I0903 23:05:56.871850  298546 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-250903"
	I0903 23:05:56.871915  298546 host.go:66] Checking if "addons-250903" exists ...
	I0903 23:05:56.873072  298546 cli_runner.go:164] Run: docker container inspect addons-250903 --format={{.State.Status}}
	I0903 23:05:56.871627  298546 addons.go:69] Setting cloud-spanner=true in profile "addons-250903"
	I0903 23:05:56.873949  298546 addons.go:238] Setting addon cloud-spanner=true in "addons-250903"
	I0903 23:05:56.874021  298546 host.go:66] Checking if "addons-250903" exists ...
	I0903 23:05:56.871639  298546 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-250903"
	I0903 23:05:56.874113  298546 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-250903"
	I0903 23:05:56.874139  298546 host.go:66] Checking if "addons-250903" exists ...
	I0903 23:05:56.874587  298546 cli_runner.go:164] Run: docker container inspect addons-250903 --format={{.State.Status}}
	I0903 23:05:56.874731  298546 cli_runner.go:164] Run: docker container inspect addons-250903 --format={{.State.Status}}
	I0903 23:05:56.883603  298546 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 23:05:56.871646  298546 addons.go:69] Setting default-storageclass=true in profile "addons-250903"
	I0903 23:05:56.883909  298546 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-250903"
	I0903 23:05:56.871742  298546 addons.go:69] Setting gcp-auth=true in profile "addons-250903"
	I0903 23:05:56.871751  298546 addons.go:69] Setting ingress=true in profile "addons-250903"
	I0903 23:05:56.871757  298546 addons.go:69] Setting ingress-dns=true in profile "addons-250903"
	I0903 23:05:56.871762  298546 addons.go:69] Setting inspektor-gadget=true in profile "addons-250903"
	I0903 23:05:56.871768  298546 addons.go:69] Setting metrics-server=true in profile "addons-250903"
	I0903 23:05:56.871774  298546 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-250903"
	I0903 23:05:56.871780  298546 addons.go:69] Setting registry=true in profile "addons-250903"
	I0903 23:05:56.871786  298546 addons.go:69] Setting registry-creds=true in profile "addons-250903"
	I0903 23:05:56.871791  298546 addons.go:69] Setting storage-provisioner=true in profile "addons-250903"
	I0903 23:05:56.871797  298546 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-250903"
	I0903 23:05:56.871803  298546 addons.go:69] Setting volcano=true in profile "addons-250903"
	I0903 23:05:56.871808  298546 addons.go:69] Setting volumesnapshots=true in profile "addons-250903"
	I0903 23:05:56.884097  298546 addons.go:238] Setting addon volumesnapshots=true in "addons-250903"
	I0903 23:05:56.884132  298546 host.go:66] Checking if "addons-250903" exists ...
	I0903 23:05:56.884590  298546 cli_runner.go:164] Run: docker container inspect addons-250903 --format={{.State.Status}}
	I0903 23:05:56.888080  298546 addons.go:238] Setting addon metrics-server=true in "addons-250903"
	I0903 23:05:56.888188  298546 host.go:66] Checking if "addons-250903" exists ...
	I0903 23:05:56.888822  298546 cli_runner.go:164] Run: docker container inspect addons-250903 --format={{.State.Status}}
	I0903 23:05:56.898403  298546 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-250903"
	I0903 23:05:56.898514  298546 host.go:66] Checking if "addons-250903" exists ...
	I0903 23:05:56.899030  298546 cli_runner.go:164] Run: docker container inspect addons-250903 --format={{.State.Status}}
	I0903 23:05:56.902952  298546 cli_runner.go:164] Run: docker container inspect addons-250903 --format={{.State.Status}}
	I0903 23:05:56.907727  298546 mustload.go:65] Loading cluster: addons-250903
	I0903 23:05:56.907999  298546 config.go:182] Loaded profile config "addons-250903": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0903 23:05:56.908282  298546 cli_runner.go:164] Run: docker container inspect addons-250903 --format={{.State.Status}}
	I0903 23:05:56.908902  298546 addons.go:238] Setting addon registry=true in "addons-250903"
	I0903 23:05:56.908984  298546 host.go:66] Checking if "addons-250903" exists ...
	I0903 23:05:56.909470  298546 cli_runner.go:164] Run: docker container inspect addons-250903 --format={{.State.Status}}
	I0903 23:05:56.923819  298546 addons.go:238] Setting addon ingress=true in "addons-250903"
	I0903 23:05:56.923897  298546 host.go:66] Checking if "addons-250903" exists ...
	I0903 23:05:56.924402  298546 cli_runner.go:164] Run: docker container inspect addons-250903 --format={{.State.Status}}
	I0903 23:05:56.931837  298546 addons.go:238] Setting addon registry-creds=true in "addons-250903"
	I0903 23:05:56.931947  298546 host.go:66] Checking if "addons-250903" exists ...
	I0903 23:05:56.932489  298546 cli_runner.go:164] Run: docker container inspect addons-250903 --format={{.State.Status}}
	I0903 23:05:56.949222  298546 addons.go:238] Setting addon ingress-dns=true in "addons-250903"
	I0903 23:05:56.949297  298546 host.go:66] Checking if "addons-250903" exists ...
	I0903 23:05:56.949800  298546 cli_runner.go:164] Run: docker container inspect addons-250903 --format={{.State.Status}}
	I0903 23:05:56.951158  298546 addons.go:238] Setting addon storage-provisioner=true in "addons-250903"
	I0903 23:05:56.951304  298546 host.go:66] Checking if "addons-250903" exists ...
	I0903 23:05:56.956277  298546 cli_runner.go:164] Run: docker container inspect addons-250903 --format={{.State.Status}}
	I0903 23:05:56.972264  298546 addons.go:238] Setting addon inspektor-gadget=true in "addons-250903"
	I0903 23:05:56.972320  298546 host.go:66] Checking if "addons-250903" exists ...
	I0903 23:05:56.972814  298546 cli_runner.go:164] Run: docker container inspect addons-250903 --format={{.State.Status}}
	I0903 23:05:56.989092  298546 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-250903"
	I0903 23:05:56.989463  298546 cli_runner.go:164] Run: docker container inspect addons-250903 --format={{.State.Status}}
	I0903 23:05:57.019880  298546 addons.go:238] Setting addon volcano=true in "addons-250903"
	I0903 23:05:57.019954  298546 host.go:66] Checking if "addons-250903" exists ...
	I0903 23:05:57.020482  298546 cli_runner.go:164] Run: docker container inspect addons-250903 --format={{.State.Status}}
	I0903 23:05:57.066796  298546 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0903 23:05:57.069567  298546 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0903 23:05:57.069605  298546 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0903 23:05:57.069687  298546 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-250903
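The Go template in these cli_runner calls extracts the host port that Docker publishes for the container's port 22, which is the port the ssh clients dialing 127.0.0.1:33138 below connect to. Run standalone it looks like this (container name from the log; the comment about how the value is consumed is an assumption):

    docker container inspect \
      -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
      addons-250903
    # prints the published host port (33138 in this run); minikube's
    # ssh_runner then dials 127.0.0.1:<port> to reach the node's sshd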
	I0903 23:05:57.115982  298546 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0903 23:05:57.116188  298546 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.40
	I0903 23:05:57.116420  298546 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0903 23:05:57.127459  298546 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0903 23:05:57.127482  298546 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0903 23:05:57.127551  298546 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-250903
	I0903 23:05:57.127769  298546 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0903 23:05:57.128391  298546 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0903 23:05:57.128553  298546 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0903 23:05:57.128528  298546 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I0903 23:05:57.128536  298546 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0903 23:05:57.128861  298546 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-250903
	I0903 23:05:57.139141  298546 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0903 23:05:57.139185  298546 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0903 23:05:57.139261  298546 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-250903
	I0903 23:05:57.150852  298546 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0903 23:05:57.151005  298546 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0903 23:05:57.155789  298546 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0903 23:05:57.163919  298546 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0903 23:05:57.163944  298546 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0903 23:05:57.164019  298546 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-250903
	I0903 23:05:57.165723  298546 host.go:66] Checking if "addons-250903" exists ...
	I0903 23:05:57.176559  298546 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0903 23:05:57.185047  298546 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0903 23:05:57.185208  298546 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I0903 23:05:57.193141  298546 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0903 23:05:57.193168  298546 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0903 23:05:57.193251  298546 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-250903
	I0903 23:05:57.195587  298546 addons.go:238] Setting addon default-storageclass=true in "addons-250903"
	I0903 23:05:57.195627  298546 host.go:66] Checking if "addons-250903" exists ...
	I0903 23:05:57.200453  298546 cli_runner.go:164] Run: docker container inspect addons-250903 --format={{.State.Status}}
	I0903 23:05:57.212270  298546 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0903 23:05:57.215194  298546 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0903 23:05:57.218491  298546 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0903 23:05:57.221300  298546 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0903 23:05:57.221328  298546 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0903 23:05:57.221481  298546 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-250903
	I0903 23:05:57.257643  298546 out.go:179]   - Using image docker.io/registry:3.0.0
	I0903 23:05:57.260541  298546 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0903 23:05:57.264574  298546 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.3
	I0903 23:05:57.266746  298546 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0903 23:05:57.266775  298546 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0903 23:05:57.266849  298546 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-250903
	I0903 23:05:57.267583  298546 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0903 23:05:57.267628  298546 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0903 23:05:57.267759  298546 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-250903
	I0903 23:05:57.301757  298546 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
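The sed pipeline above splices two stanzas into the coredns ConfigMap before replacing it: a hosts block mapping host.minikube.internal to the gateway IP, and a log directive ahead of errors. Reconstructed from the sed expressions (the surrounding Corefile lines are assumed), the edited fragment is shown in the comments below:

    sudo /var/lib/minikube/binaries/v1.34.0/kubectl \
        --kubeconfig=/var/lib/minikube/kubeconfig \
        -n kube-system get configmap coredns -o yaml
    # after the replace, the Corefile carries, ahead of its forward stanza:
    #     hosts {
    #        192.168.49.1 host.minikube.internal
    #        fallthrough
    #     }
    # so pods can resolve the host's gateway address by name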
	I0903 23:05:57.302129  298546 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0903 23:05:57.305310  298546 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0903 23:05:57.310295  298546 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0903 23:05:57.310383  298546 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0903 23:05:57.310472  298546 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-250903
	I0903 23:05:57.344503  298546 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I0903 23:05:57.347441  298546 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0903 23:05:57.347475  298546 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I0903 23:05:57.347559  298546 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-250903
	I0903 23:05:57.366352  298546 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21341-295927/.minikube/machines/addons-250903/id_rsa Username:docker}
	I0903 23:05:57.386742  298546 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-250903"
	I0903 23:05:57.386783  298546 host.go:66] Checking if "addons-250903" exists ...
	I0903 23:05:57.387178  298546 cli_runner.go:164] Run: docker container inspect addons-250903 --format={{.State.Status}}
	I0903 23:05:57.388122  298546 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21341-295927/.minikube/machines/addons-250903/id_rsa Username:docker}
	I0903 23:05:57.400746  298546 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0903 23:05:57.404910  298546 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0903 23:05:57.404932  298546 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0903 23:05:57.405003  298546 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-250903
	I0903 23:05:57.422458  298546 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.0
	I0903 23:05:57.428769  298546 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0903 23:05:57.428803  298546 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I0903 23:05:57.428878  298546 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-250903
	I0903 23:05:57.452423  298546 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21341-295927/.minikube/machines/addons-250903/id_rsa Username:docker}
	I0903 23:05:57.469225  298546 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21341-295927/.minikube/machines/addons-250903/id_rsa Username:docker}
	W0903 23:05:57.479364  298546 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0903 23:05:57.479586  298546 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21341-295927/.minikube/machines/addons-250903/id_rsa Username:docker}
	I0903 23:05:57.501348  298546 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21341-295927/.minikube/machines/addons-250903/id_rsa Username:docker}
	I0903 23:05:57.503399  298546 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0903 23:05:57.503415  298546 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0903 23:05:57.503472  298546 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-250903
	I0903 23:05:57.547607  298546 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21341-295927/.minikube/machines/addons-250903/id_rsa Username:docker}
	I0903 23:05:57.551795  298546 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21341-295927/.minikube/machines/addons-250903/id_rsa Username:docker}
	I0903 23:05:57.585249  298546 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21341-295927/.minikube/machines/addons-250903/id_rsa Username:docker}
	I0903 23:05:57.586443  298546 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21341-295927/.minikube/machines/addons-250903/id_rsa Username:docker}
	I0903 23:05:57.600065  298546 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21341-295927/.minikube/machines/addons-250903/id_rsa Username:docker}
	I0903 23:05:57.619878  298546 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21341-295927/.minikube/machines/addons-250903/id_rsa Username:docker}
	W0903 23:05:57.623880  298546 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0903 23:05:57.623914  298546 retry.go:31] will retry after 353.488498ms: ssh: handshake failed: EOF
	I0903 23:05:57.624117  298546 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21341-295927/.minikube/machines/addons-250903/id_rsa Username:docker}
	I0903 23:05:57.627037  298546 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21341-295927/.minikube/machines/addons-250903/id_rsa Username:docker}
	I0903 23:05:57.644785  298546 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0903 23:05:57.647699  298546 out.go:179]   - Using image docker.io/busybox:stable
	I0903 23:05:57.653725  298546 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0903 23:05:57.653754  298546 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0903 23:05:57.653821  298546 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-250903
	I0903 23:05:57.685871  298546 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21341-295927/.minikube/machines/addons-250903/id_rsa Username:docker}
	I0903 23:05:57.876075  298546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0903 23:05:57.924487  298546 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0903 23:05:57.924553  298546 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0903 23:05:57.941892  298546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0903 23:05:57.979966  298546 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0903 23:05:57.980038  298546 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0903 23:05:58.019147  298546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0903 23:05:58.022565  298546 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0903 23:05:58.022633  298546 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0903 23:05:58.052429  298546 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0903 23:05:58.052508  298546 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0903 23:05:58.086929  298546 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0903 23:05:58.086998  298546 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0903 23:05:58.121435  298546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0903 23:05:58.140858  298546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0903 23:05:58.162611  298546 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0903 23:05:58.162637  298546 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0903 23:05:58.175825  298546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0903 23:05:58.183670  298546 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0903 23:05:58.183691  298546 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I0903 23:05:58.190293  298546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0903 23:05:58.220197  298546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0903 23:05:58.234132  298546 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0903 23:05:58.234155  298546 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0903 23:05:58.241982  298546 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0903 23:05:58.242016  298546 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0903 23:05:58.278101  298546 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0903 23:05:58.278149  298546 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0903 23:05:58.353488  298546 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0903 23:05:58.353519  298546 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0903 23:05:58.399342  298546 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0903 23:05:58.399381  298546 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0903 23:05:58.403358  298546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0903 23:05:58.406663  298546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0903 23:05:58.494339  298546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0903 23:05:58.500536  298546 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0903 23:05:58.500563  298546 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0903 23:05:58.503685  298546 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0903 23:05:58.503708  298546 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0903 23:05:58.555120  298546 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0903 23:05:58.555159  298546 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0903 23:05:58.559628  298546 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0903 23:05:58.559686  298546 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0903 23:05:58.655429  298546 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0903 23:05:58.655470  298546 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0903 23:05:58.674167  298546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0903 23:05:58.742620  298546 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0903 23:05:58.742652  298546 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0903 23:05:58.812361  298546 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0903 23:05:58.812386  298546 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0903 23:05:58.842544  298546 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0903 23:05:58.842574  298546 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0903 23:05:58.886108  298546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0903 23:05:58.971566  298546 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0903 23:05:58.971592  298546 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0903 23:05:58.980036  298546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0903 23:05:59.062455  298546 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0903 23:05:59.062493  298546 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0903 23:05:59.244198  298546 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0903 23:05:59.244230  298546 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0903 23:05:59.413989  298546 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0903 23:05:59.414067  298546 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0903 23:05:59.593675  298546 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0903 23:05:59.593738  298546 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0903 23:05:59.752422  298546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0903 23:06:00.438904  298546 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.137080722s)
	I0903 23:06:00.438951  298546 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.136684301s)
	I0903 23:06:00.438940  298546 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0903 23:06:00.440887  298546 node_ready.go:35] waiting up to 6m0s for node "addons-250903" to be "Ready" ...
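node_ready polls the node's Ready condition, which stays False until the kubelet can report a working pod network, so the kindnet manifest applied at 23:05:52 has to come up first. A one-liner that reads the same condition (the jsonpath query is illustrative; the poller's actual mechanism is not shown in this log):

    kubectl get node addons-250903 \
        -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    # prints "False" until the CNI is up, then "True"; the will-retry
    # messages below correspond to the False reads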
	I0903 23:06:01.364969  298546 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-250903" context rescaled to 1 replicas
	I0903 23:06:01.784989  298546 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.908834811s)
	I0903 23:06:02.118744  298546 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.17676857s)
	I0903 23:06:02.118839  298546 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.099613899s)
	W0903 23:06:02.456318  298546 node_ready.go:57] node "addons-250903" has "Ready":"False" status (will retry)
	I0903 23:06:03.104385  298546 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.963481246s)
	I0903 23:06:03.104467  298546 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.928615999s)
	I0903 23:06:03.104546  298546 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.914230286s)
	I0903 23:06:03.104810  298546 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.884581248s)
	I0903 23:06:03.104951  298546 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.701563499s)
	I0903 23:06:03.104970  298546 addons.go:479] Verifying addon registry=true in "addons-250903"
	I0903 23:06:03.105041  298546 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.983579185s)
	I0903 23:06:03.105327  298546 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.698635249s)
	W0903 23:06:03.105351  298546 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0903 23:06:03.105373  298546 retry.go:31] will retry after 220.649379ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
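kubectl raises "apiVersion not set, kind not set" when a manifest reaches the validator without the mandatory type header, and the transfer at 23:05:57.428803 copied ig-crd.yaml across as only 14 bytes, so the apply was effectively validating an empty document. For contrast, the smallest header any object needs (a hypothetical ConfigMap, shown only to illustrate the two required fields):

    cat <<'EOF' | kubectl apply --dry-run=client -f -
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: example   # placeholder object; only apiVersion/kind matter here
    EOF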
	I0903 23:06:03.105433  298546 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.611065104s)
	I0903 23:06:03.105056  298546 addons.go:479] Verifying addon ingress=true in "addons-250903"
	I0903 23:06:03.105593  298546 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.43139757s)
	I0903 23:06:03.105659  298546 addons.go:479] Verifying addon metrics-server=true in "addons-250903"
	I0903 23:06:03.109669  298546 out.go:179] * Verifying registry addon...
	I0903 23:06:03.111537  298546 out.go:179] * Verifying ingress addon...
	I0903 23:06:03.114206  298546 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0903 23:06:03.117083  298546 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0903 23:06:03.127712  298546 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0903 23:06:03.127733  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:03.131876  298546 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0903 23:06:03.131900  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0903 23:06:03.152434  298546 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
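"Operation cannot be fulfilled ... the object has been modified" is the API server's optimistic-concurrency check: the addon read the local-path StorageClass, another writer updated it in between (the storageclass apply at 23:05:58.220197 is a plausible candidate), and the stale resourceVersion in the update was rejected. A patch avoids the conflict because it does not submit a full stale object (illustrative, not the code path minikube takes):

    kubectl patch storageclass local-path -p \
        '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'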
	I0903 23:06:03.171814  298546 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.285661626s)
	W0903 23:06:03.171878  298546 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0903 23:06:03.171901  298546 retry.go:31] will retry after 169.968769ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
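The "no matches for kind VolumeSnapshotClass" failure is an ordering race: the CRD that defines the type and a VolumeSnapshotClass instance were submitted in one apply, and API discovery had not registered the new kind before the instance was validated, hence the hint to install CRDs first. The retry at 23:06:03.342752 succeeds once the CRD is established; applying in two steps avoids the race entirely (a sketch of that ordering, not what minikube does, since it simply retries):

    kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait --for condition=established --timeout=60s \
        crd/volumesnapshotclasses.snapshot.storage.k8s.io
    kubectl apply -f csi-hostpath-snapshotclass.yaml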
	I0903 23:06:03.171975  298546 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.191911358s)
	I0903 23:06:03.175197  298546 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-250903 service yakd-dashboard -n yakd-dashboard
	
	I0903 23:06:03.327064  298546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0903 23:06:03.342752  298546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0903 23:06:03.545745  298546 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.793277559s)
	I0903 23:06:03.545853  298546 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-250903"
	I0903 23:06:03.549355  298546 out.go:179] * Verifying csi-hostpath-driver addon...
	I0903 23:06:03.553028  298546 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0903 23:06:03.584452  298546 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0903 23:06:03.584520  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:03.687343  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:03.688622  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:04.062847  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:04.127752  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:04.132431  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:04.557972  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:04.585908  298546 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.258759631s)
	W0903 23:06:04.586095  298546 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0903 23:06:04.586145  298546 retry.go:31] will retry after 238.989904ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0903 23:06:04.586069  298546 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.243198422s)
	I0903 23:06:04.658411  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:04.658652  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:04.825363  298546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W0903 23:06:04.945039  298546 node_ready.go:57] node "addons-250903" has "Ready":"False" status (will retry)
	I0903 23:06:05.057506  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:05.124361  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:05.131101  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:05.557519  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0903 23:06:05.651940  298546 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0903 23:06:05.652023  298546 retry.go:31] will retry after 690.560995ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0903 23:06:05.658203  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:05.658350  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:06.057138  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:06.121382  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:06.122847  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:06.343039  298546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0903 23:06:06.415364  298546 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0903 23:06:06.415478  298546 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-250903
	I0903 23:06:06.436210  298546 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21341-295927/.minikube/machines/addons-250903/id_rsa Username:docker}
	I0903 23:06:06.558185  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:06.562119  298546 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0903 23:06:06.584569  298546 addons.go:238] Setting addon gcp-auth=true in "addons-250903"
	I0903 23:06:06.584668  298546 host.go:66] Checking if "addons-250903" exists ...
	I0903 23:06:06.585147  298546 cli_runner.go:164] Run: docker container inspect addons-250903 --format={{.State.Status}}
	I0903 23:06:06.618094  298546 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0903 23:06:06.618156  298546 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-250903
	I0903 23:06:06.646093  298546 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21341-295927/.minikube/machines/addons-250903/id_rsa Username:docker}
	I0903 23:06:06.659061  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:06.659803  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:07.057196  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:07.118076  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:07.121039  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0903 23:06:07.268140  298546 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0903 23:06:07.268174  298546 retry.go:31] will retry after 1.021318718s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	[duplicate stdout/stderr elided; identical to the apply-failed warning immediately above]

	I0903 23:06:07.271713  298546 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0903 23:06:07.274833  298546 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0903 23:06:07.277688  298546 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0903 23:06:07.277717  298546 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0903 23:06:07.296299  298546 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0903 23:06:07.296370  298546 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0903 23:06:07.315739  298546 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0903 23:06:07.315763  298546 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0903 23:06:07.334932  298546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	W0903 23:06:07.445434  298546 node_ready.go:57] node "addons-250903" has "Ready":"False" status (will retry)
	I0903 23:06:07.556571  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:07.617584  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:07.620534  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:07.827514  298546 addons.go:479] Verifying addon gcp-auth=true in "addons-250903"
	I0903 23:06:07.830847  298546 out.go:179] * Verifying gcp-auth addon...
	I0903 23:06:07.834787  298546 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0903 23:06:07.839921  298546 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0903 23:06:07.839991  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
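	[editor's note] The gcp-auth verification loop polls pods by label until they leave Pending. The same query run by hand (illustrative):

	    kubectl --context addons-250903 -n gcp-auth get pods \
	      -l kubernetes.io/minikube-addons=gcp-auth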
	I0903 23:06:08.057666  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:08.118107  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:08.120843  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:08.290290  298546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0903 23:06:08.338738  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:08.556344  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:08.619288  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:08.622641  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:08.838952  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:09.056333  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0903 23:06:09.122612  298546 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0903 23:06:09.122657  298546 retry.go:31] will retry after 1.07942072s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	[duplicate stdout/stderr elided; identical to the apply-failed warning immediately above]
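	[editor's note] kubectl's own error text names the escape hatch: --validate=false skips the client-side schema check. Applied to the failing command it would look like this (illustrative only; it would push the malformed file to the server rather than fix it):

	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	      /var/lib/minikube/binaries/v1.34.0/kubectl apply --force --validate=false \
	      -f /etc/kubernetes/addons/ig-crd.yaml \
	      -f /etc/kubernetes/addons/ig-deployment.yaml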
	I0903 23:06:09.127920  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:09.128050  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:09.338636  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:09.556582  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:09.617272  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:09.620214  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:09.838356  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0903 23:06:09.944067  298546 node_ready.go:57] node "addons-250903" has "Ready":"False" status (will retry)
	I0903 23:06:10.056619  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:10.117651  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:10.125523  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:10.202578  298546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0903 23:06:10.338034  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:10.558558  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:10.658990  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:10.659802  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:10.838475  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0903 23:06:11.038441  298546 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0903 23:06:11.038477  298546 retry.go:31] will retry after 2.446235563s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	[duplicate stdout/stderr elided; identical to the apply-failed warning immediately above]
	I0903 23:06:11.059380  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:11.121951  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:11.123458  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:11.338393  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:11.556375  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:11.617293  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:11.620035  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:11.837926  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:12.057834  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:12.122336  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:12.122849  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:12.337802  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0903 23:06:12.444829  298546 node_ready.go:57] node "addons-250903" has "Ready":"False" status (will retry)
	I0903 23:06:12.556998  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:12.618524  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:12.621230  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:12.838007  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:13.056521  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:13.118847  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:13.121630  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:13.338871  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:13.485061  298546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0903 23:06:13.557043  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:13.619045  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:13.621987  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:13.840192  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:14.057110  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:14.119974  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:14.129983  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0903 23:06:14.307575  298546 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0903 23:06:14.307607  298546 retry.go:31] will retry after 2.602105684s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	[duplicate stdout/stderr elided; identical to the apply-failed warning immediately above]
	I0903 23:06:14.338760  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:14.556730  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:14.618091  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:14.620899  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:14.837715  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0903 23:06:14.944958  298546 node_ready.go:57] node "addons-250903" has "Ready":"False" status (will retry)
	I0903 23:06:15.057188  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:15.117649  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:15.120690  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:15.337882  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:15.556181  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:15.618102  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:15.620592  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:15.838626  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:16.056684  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:16.127100  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:16.127403  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:16.338366  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:16.556780  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:16.617877  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:16.620530  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:16.838541  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:16.910693  298546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W0903 23:06:16.945090  298546 node_ready.go:57] node "addons-250903" has "Ready":"False" status (will retry)
	I0903 23:06:17.056707  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:17.124800  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:17.125307  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:17.338604  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:17.557266  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:17.617588  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:17.621275  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0903 23:06:17.694909  298546 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0903 23:06:17.694945  298546 retry.go:31] will retry after 4.064456822s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	[duplicate stdout/stderr elided; identical to the apply-failed warning immediately above]
	I0903 23:06:17.838098  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:18.057165  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:18.124998  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:18.126239  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:18.338183  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:18.557196  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:18.658492  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:18.658818  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:18.837491  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:19.056710  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:19.119579  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:19.121014  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:19.338439  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0903 23:06:19.444390  298546 node_ready.go:57] node "addons-250903" has "Ready":"False" status (will retry)
	I0903 23:06:19.556108  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:19.617932  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:19.620090  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:19.838023  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:20.056722  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:20.117866  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:20.120621  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:20.338126  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:20.556943  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:20.618051  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:20.619967  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:20.838029  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:21.056184  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:21.118474  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:21.121004  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:21.338167  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:21.556729  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:21.617639  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:21.620320  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:21.760443  298546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0903 23:06:21.838725  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0903 23:06:21.944874  298546 node_ready.go:57] node "addons-250903" has "Ready":"False" status (will retry)
	I0903 23:06:22.057507  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:22.117406  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:22.138057  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:22.338338  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:22.557769  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0903 23:06:22.573326  298546 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0903 23:06:22.573369  298546 retry.go:31] will retry after 6.842720648s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	[duplicate stdout/stderr elided; identical to the apply-failed warning immediately above]
	I0903 23:06:22.617466  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:22.619935  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:22.837832  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:23.056386  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:23.117740  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:23.120760  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:23.338543  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:23.556372  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:23.617625  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:23.619925  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:23.838146  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:24.056785  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:24.119032  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:24.120459  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:24.338194  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0903 23:06:24.444117  298546 node_ready.go:57] node "addons-250903" has "Ready":"False" status (will retry)
	I0903 23:06:24.556423  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:24.617574  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:24.619848  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:24.837849  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:25.056453  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:25.117893  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:25.124785  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:25.337719  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:25.556841  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:25.617779  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:25.619763  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:25.837917  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:26.057119  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:26.118379  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:26.124834  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:26.337850  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0903 23:06:26.444894  298546 node_ready.go:57] node "addons-250903" has "Ready":"False" status (will retry)
	I0903 23:06:26.557047  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:26.619633  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:26.620119  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:26.837921  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:27.056500  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:27.120621  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:27.122328  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:27.338471  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:27.555944  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:27.617669  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:27.620350  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:27.838117  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:28.056455  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:28.118561  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:28.121455  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:28.338525  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:28.556886  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:28.617550  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:28.619951  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:28.837722  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0903 23:06:28.944898  298546 node_ready.go:57] node "addons-250903" has "Ready":"False" status (will retry)
	I0903 23:06:29.056923  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:29.123684  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:29.126276  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:29.338492  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:29.416852  298546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0903 23:06:29.556358  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:29.617981  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:29.621405  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:29.838626  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:30.059954  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:30.124309  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:30.124608  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0903 23:06:30.293724  298546 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0903 23:06:30.293758  298546 retry.go:31] will retry after 7.350752776s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	[duplicate stdout/stderr elided; identical to the apply-failed warning immediately above]
	I0903 23:06:30.338385  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:30.556537  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:30.618107  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:30.620314  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:30.838294  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:31.056642  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:31.118267  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:31.125585  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:31.337709  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0903 23:06:31.444171  298546 node_ready.go:57] node "addons-250903" has "Ready":"False" status (will retry)
	I0903 23:06:31.557490  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:31.617655  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:31.620011  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:31.838003  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:32.056720  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:32.121543  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:32.123540  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:32.338377  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:32.556277  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:32.620158  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:32.621143  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:32.838103  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:33.056052  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:33.120598  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:33.121006  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:33.337849  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0903 23:06:33.444814  298546 node_ready.go:57] node "addons-250903" has "Ready":"False" status (will retry)
	I0903 23:06:33.556919  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:33.617703  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:33.619856  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:33.838143  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:34.056408  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:34.124297  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:34.126248  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:34.338301  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:34.556786  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:34.617837  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:34.620595  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:34.838571  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:35.055950  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:35.118130  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:35.120991  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:35.338162  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:35.556012  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:35.618255  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:35.620408  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:35.838649  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0903 23:06:35.944557  298546 node_ready.go:57] node "addons-250903" has "Ready":"False" status (will retry)
	I0903 23:06:36.056937  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:36.119786  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:36.121571  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:36.338440  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:36.555918  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:36.617729  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:36.620509  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:36.838486  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:37.056424  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:37.118121  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:37.125285  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:37.339086  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:37.556159  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:37.617694  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:37.620430  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:37.644692  298546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0903 23:06:37.838461  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0903 23:06:37.944929  298546 node_ready.go:57] node "addons-250903" has "Ready":"False" status (will retry)
	I0903 23:06:38.056648  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:38.117723  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:38.125760  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:38.339004  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0903 23:06:38.461215  298546 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0903 23:06:38.461244  298546 retry.go:31] will retry after 21.567814658s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	[duplicate stdout/stderr elided; identical to the apply-failed warning immediately above]
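	[editor's note] The retry intervals logged by retry.go grow from 690ms through 1.02s, 1.08s, 2.45s, 2.60s, 4.06s, 6.84s and 7.35s to 21.57s: roughly exponential backoff with jitter. A minimal shell sketch of such a loop (illustrative, not minikube's implementation):

	    delay=0.7
	    until sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	        /var/lib/minikube/binaries/v1.34.0/kubectl apply --force \
	        -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml; do
	      sleep "$delay"
	      delay=$(awk -v d="$delay" 'BEGIN { print d * 2 }')  # double the wait on each failure
	    done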
	I0903 23:06:38.556335  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:38.617257  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:38.620123  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:38.838135  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:39.055997  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:39.118360  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:39.121256  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:39.338479  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:39.555976  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:39.617777  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:39.620271  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:39.838729  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:40.064899  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:40.119558  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:40.121756  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:40.338047  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0903 23:06:40.444774  298546 node_ready.go:57] node "addons-250903" has "Ready":"False" status (will retry)
	I0903 23:06:40.556581  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:40.617630  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:40.620590  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:40.837770  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:41.056761  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:41.127276  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:41.127449  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:41.341936  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:41.511804  298546 node_ready.go:49] node "addons-250903" is "Ready"
	I0903 23:06:41.511834  298546 node_ready.go:38] duration metric: took 41.070701346s for node "addons-250903" to be "Ready" ...
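	[editor's note] The node turned Ready 41s after the wait began. The equivalent one-off check outside the test harness (illustrative):

	    kubectl --context addons-250903 wait --for=condition=Ready \
	      node/addons-250903 --timeout=120s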
	I0903 23:06:41.511849  298546 api_server.go:52] waiting for apiserver process to appear ...
	I0903 23:06:41.511916  298546 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:06:41.549645  298546 api_server.go:72] duration metric: took 44.682738821s to wait for apiserver process to appear ...
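	[editor's note] The process probe above runs pgrep with -x (exact match), -n (newest matching process) and -f (match against the full command line). A minimal sketch of the same check through os/exec follows; it is an illustration of the logged command, not minikube's actual code, and it runs locally rather than over the SSH hop the real flow uses.

	-- sketch (Go) --
	// apiproc.go — sketch of the apiserver process probe in the log:
	// pgrep -xnf prints the PID of the newest process whose full command
	// line matches the pattern; a zero exit status means it exists.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func apiserverPID() (string, error) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err != nil {
			return "", fmt.Errorf("kube-apiserver not running: %w", err)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		pid, err := apiserverPID()
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("kube-apiserver pid:", pid)
	}
	-- /sketch --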
	I0903 23:06:41.549713  298546 api_server.go:88] waiting for apiserver healthz status ...
	I0903 23:06:41.549748  298546 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0903 23:06:41.588497  298546 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0903 23:06:41.588563  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:41.589365  298546 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0903 23:06:41.592224  298546 api_server.go:141] control plane version: v1.34.0
	I0903 23:06:41.592290  298546 api_server.go:131] duration metric: took 42.556387ms to wait for apiserver health ...
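	[editor's note] The healthz step above polls https://192.168.49.2:8443/healthz until it returns 200 with body "ok". A sketch of that polling pattern, assuming a fixed 500 ms interval and skipped TLS verification (the apiserver here serves a self-signed certificate; real code would pin the cluster CA):

	-- sketch (Go) --
	// healthzcheck.go — sketch of polling an apiserver /healthz endpoint
	// until it reports "ok", mirroring the log lines above. URL, interval
	// and TLS handling are assumptions, not minikube's implementation.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Self-signed apiserver cert in this setup; verification skipped
			// for the sketch only.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil // "returned 200: ok", as in the log
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver %s not healthy within %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.49.2:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
	-- /sketch --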
	I0903 23:06:41.592325  298546 system_pods.go:43] waiting for kube-system pods to appear ...
	I0903 23:06:41.607769  298546 system_pods.go:59] 19 kube-system pods found
	I0903 23:06:41.610471  298546 system_pods.go:61] "coredns-66bc5c9577-2vcg7" [fe698436-ba8f-4799-8e14-8280e3fb336c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0903 23:06:41.610505  298546 system_pods.go:61] "csi-hostpath-attacher-0" [fa6816e6-f586-4d7a-9bbb-8054791d8ceb] Pending
	I0903 23:06:41.610522  298546 system_pods.go:61] "csi-hostpath-resizer-0" [3f3b308d-7a3c-49cd-8fe8-0eb7feadcba0] Pending
	I0903 23:06:41.610528  298546 system_pods.go:61] "csi-hostpathplugin-9knw6" [5440dc0c-60c2-4c03-8d40-2b333ca89855] Pending
	I0903 23:06:41.610534  298546 system_pods.go:61] "etcd-addons-250903" [ee80fbaf-09cf-4cd3-b35f-ff7c595da819] Running
	I0903 23:06:41.610540  298546 system_pods.go:61] "kindnet-5rbmb" [e40dca87-dbeb-4db1-bb70-163996f6e4bb] Running
	I0903 23:06:41.610546  298546 system_pods.go:61] "kube-apiserver-addons-250903" [b4d2da16-6233-4333-9e10-ab35b28e253e] Running
	I0903 23:06:41.610556  298546 system_pods.go:61] "kube-controller-manager-addons-250903" [5628c7ef-41a9-4a85-bd47-b6835959aeae] Running
	I0903 23:06:41.610566  298546 system_pods.go:61] "kube-ingress-dns-minikube" [fdd48c20-2391-4056-865e-fc63e06cf0eb] Pending
	I0903 23:06:41.610572  298546 system_pods.go:61] "kube-proxy-72qr6" [753d8202-3500-45b9-b1bb-72c1da1e96d5] Running
	I0903 23:06:41.610582  298546 system_pods.go:61] "kube-scheduler-addons-250903" [81747c4e-b953-4d0f-8377-105f9ac0210d] Running
	I0903 23:06:41.610590  298546 system_pods.go:61] "metrics-server-85b7d694d7-568v4" [412ee54d-941e-468e-870a-7a1c62c864e7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0903 23:06:41.610603  298546 system_pods.go:61] "nvidia-device-plugin-daemonset-vzwcd" [413a92e7-6b19-427c-b018-fa6ead381c02] Pending
	I0903 23:06:41.610613  298546 system_pods.go:61] "registry-66898fdd98-qrfkz" [c35fb6f1-289a-424d-978e-8267be95ebd9] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0903 23:06:41.610623  298546 system_pods.go:61] "registry-creds-764b6fb674-f9vpw" [871e7382-5284-4275-9eaa-cd1c236e5fd8] Pending
	I0903 23:06:41.610629  298546 system_pods.go:61] "registry-proxy-xk2h9" [f969f46b-d8ef-4f8e-b119-afb8070f1e80] Pending
	I0903 23:06:41.610663  298546 system_pods.go:61] "snapshot-controller-7d9fbc56b8-mdgkw" [5ee8ba09-4995-4f14-9dcf-fbbf30826c7b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0903 23:06:41.610696  298546 system_pods.go:61] "snapshot-controller-7d9fbc56b8-tr4mf" [bf502b1b-5a75-43cf-a5f1-b96970808cb0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0903 23:06:41.610706  298546 system_pods.go:61] "storage-provisioner" [a0f2c0ae-ad55-4da5-9a0e-5afebffbf6c2] Pending
	I0903 23:06:41.610715  298546 system_pods.go:74] duration metric: took 18.370429ms to wait for pod list to return data ...
	I0903 23:06:41.610732  298546 default_sa.go:34] waiting for default service account to be created ...
	I0903 23:06:41.630385  298546 default_sa.go:45] found service account: "default"
	I0903 23:06:41.630470  298546 default_sa.go:55] duration metric: took 19.729764ms for default service account to be created ...
	I0903 23:06:41.630496  298546 system_pods.go:116] waiting for k8s-apps to be running ...
	I0903 23:06:41.634234  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:41.640203  298546 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0903 23:06:41.640238  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
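	[editor's note] The kapi.go:96 lines that dominate this log come from a loop that lists pods matching a label selector and re-checks until every match reports phase Running. A client-go sketch of that loop, using the registry addon's selector from the log; the kubeconfig path and 500 ms poll interval are assumptions:

	-- sketch (Go) --
	// podwait.go — sketch of the "waiting for pod <selector>" loop:
	// list pods by label selector, poll until all are Running.
	package main

	import (
		"context"
		"fmt"
		"time"

		v1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func waitForPods(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods(ns).List(context.Background(),
				metav1.ListOptions{LabelSelector: selector})
			if err == nil && len(pods.Items) > 0 {
				ready := true
				for _, p := range pods.Items {
					if p.Status.Phase != v1.PodRunning {
						fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
						ready = false
					}
				}
				if ready {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("pods %q not Running within %s", selector, timeout)
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitForPods(cs, "kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
	-- /sketch --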
	I0903 23:06:41.641734  298546 system_pods.go:86] 19 kube-system pods found
	I0903 23:06:41.641780  298546 system_pods.go:89] "coredns-66bc5c9577-2vcg7" [fe698436-ba8f-4799-8e14-8280e3fb336c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0903 23:06:41.641788  298546 system_pods.go:89] "csi-hostpath-attacher-0" [fa6816e6-f586-4d7a-9bbb-8054791d8ceb] Pending
	I0903 23:06:41.641794  298546 system_pods.go:89] "csi-hostpath-resizer-0" [3f3b308d-7a3c-49cd-8fe8-0eb7feadcba0] Pending
	I0903 23:06:41.641798  298546 system_pods.go:89] "csi-hostpathplugin-9knw6" [5440dc0c-60c2-4c03-8d40-2b333ca89855] Pending
	I0903 23:06:41.641801  298546 system_pods.go:89] "etcd-addons-250903" [ee80fbaf-09cf-4cd3-b35f-ff7c595da819] Running
	I0903 23:06:41.641806  298546 system_pods.go:89] "kindnet-5rbmb" [e40dca87-dbeb-4db1-bb70-163996f6e4bb] Running
	I0903 23:06:41.641810  298546 system_pods.go:89] "kube-apiserver-addons-250903" [b4d2da16-6233-4333-9e10-ab35b28e253e] Running
	I0903 23:06:41.641814  298546 system_pods.go:89] "kube-controller-manager-addons-250903" [5628c7ef-41a9-4a85-bd47-b6835959aeae] Running
	I0903 23:06:41.641821  298546 system_pods.go:89] "kube-ingress-dns-minikube" [fdd48c20-2391-4056-865e-fc63e06cf0eb] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0903 23:06:41.641830  298546 system_pods.go:89] "kube-proxy-72qr6" [753d8202-3500-45b9-b1bb-72c1da1e96d5] Running
	I0903 23:06:41.641843  298546 system_pods.go:89] "kube-scheduler-addons-250903" [81747c4e-b953-4d0f-8377-105f9ac0210d] Running
	I0903 23:06:41.641855  298546 system_pods.go:89] "metrics-server-85b7d694d7-568v4" [412ee54d-941e-468e-870a-7a1c62c864e7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0903 23:06:41.641859  298546 system_pods.go:89] "nvidia-device-plugin-daemonset-vzwcd" [413a92e7-6b19-427c-b018-fa6ead381c02] Pending
	I0903 23:06:41.641867  298546 system_pods.go:89] "registry-66898fdd98-qrfkz" [c35fb6f1-289a-424d-978e-8267be95ebd9] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0903 23:06:41.641876  298546 system_pods.go:89] "registry-creds-764b6fb674-f9vpw" [871e7382-5284-4275-9eaa-cd1c236e5fd8] Pending
	I0903 23:06:41.641881  298546 system_pods.go:89] "registry-proxy-xk2h9" [f969f46b-d8ef-4f8e-b119-afb8070f1e80] Pending
	I0903 23:06:41.641888  298546 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mdgkw" [5ee8ba09-4995-4f14-9dcf-fbbf30826c7b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0903 23:06:41.641898  298546 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tr4mf" [bf502b1b-5a75-43cf-a5f1-b96970808cb0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0903 23:06:41.641903  298546 system_pods.go:89] "storage-provisioner" [a0f2c0ae-ad55-4da5-9a0e-5afebffbf6c2] Pending
	I0903 23:06:41.641925  298546 retry.go:31] will retry after 193.894319ms: missing components: kube-dns
	I0903 23:06:41.856250  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:41.857424  298546 system_pods.go:86] 19 kube-system pods found
	I0903 23:06:41.857464  298546 system_pods.go:89] "coredns-66bc5c9577-2vcg7" [fe698436-ba8f-4799-8e14-8280e3fb336c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0903 23:06:41.857473  298546 system_pods.go:89] "csi-hostpath-attacher-0" [fa6816e6-f586-4d7a-9bbb-8054791d8ceb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0903 23:06:41.857482  298546 system_pods.go:89] "csi-hostpath-resizer-0" [3f3b308d-7a3c-49cd-8fe8-0eb7feadcba0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0903 23:06:41.857490  298546 system_pods.go:89] "csi-hostpathplugin-9knw6" [5440dc0c-60c2-4c03-8d40-2b333ca89855] Pending
	I0903 23:06:41.857497  298546 system_pods.go:89] "etcd-addons-250903" [ee80fbaf-09cf-4cd3-b35f-ff7c595da819] Running
	I0903 23:06:41.857507  298546 system_pods.go:89] "kindnet-5rbmb" [e40dca87-dbeb-4db1-bb70-163996f6e4bb] Running
	I0903 23:06:41.857512  298546 system_pods.go:89] "kube-apiserver-addons-250903" [b4d2da16-6233-4333-9e10-ab35b28e253e] Running
	I0903 23:06:41.857517  298546 system_pods.go:89] "kube-controller-manager-addons-250903" [5628c7ef-41a9-4a85-bd47-b6835959aeae] Running
	I0903 23:06:41.857532  298546 system_pods.go:89] "kube-ingress-dns-minikube" [fdd48c20-2391-4056-865e-fc63e06cf0eb] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0903 23:06:41.857540  298546 system_pods.go:89] "kube-proxy-72qr6" [753d8202-3500-45b9-b1bb-72c1da1e96d5] Running
	I0903 23:06:41.857545  298546 system_pods.go:89] "kube-scheduler-addons-250903" [81747c4e-b953-4d0f-8377-105f9ac0210d] Running
	I0903 23:06:41.857551  298546 system_pods.go:89] "metrics-server-85b7d694d7-568v4" [412ee54d-941e-468e-870a-7a1c62c864e7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0903 23:06:41.857559  298546 system_pods.go:89] "nvidia-device-plugin-daemonset-vzwcd" [413a92e7-6b19-427c-b018-fa6ead381c02] Pending
	I0903 23:06:41.857566  298546 system_pods.go:89] "registry-66898fdd98-qrfkz" [c35fb6f1-289a-424d-978e-8267be95ebd9] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0903 23:06:41.857572  298546 system_pods.go:89] "registry-creds-764b6fb674-f9vpw" [871e7382-5284-4275-9eaa-cd1c236e5fd8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0903 23:06:41.857579  298546 system_pods.go:89] "registry-proxy-xk2h9" [f969f46b-d8ef-4f8e-b119-afb8070f1e80] Pending
	I0903 23:06:41.857586  298546 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mdgkw" [5ee8ba09-4995-4f14-9dcf-fbbf30826c7b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0903 23:06:41.857596  298546 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tr4mf" [bf502b1b-5a75-43cf-a5f1-b96970808cb0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0903 23:06:41.857608  298546 system_pods.go:89] "storage-provisioner" [a0f2c0ae-ad55-4da5-9a0e-5afebffbf6c2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0903 23:06:41.857627  298546 retry.go:31] will retry after 353.311018ms: missing components: kube-dns
	I0903 23:06:42.068021  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:42.170986  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:42.171268  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:42.265070  298546 system_pods.go:86] 19 kube-system pods found
	I0903 23:06:42.265123  298546 system_pods.go:89] "coredns-66bc5c9577-2vcg7" [fe698436-ba8f-4799-8e14-8280e3fb336c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0903 23:06:42.265133  298546 system_pods.go:89] "csi-hostpath-attacher-0" [fa6816e6-f586-4d7a-9bbb-8054791d8ceb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0903 23:06:42.265142  298546 system_pods.go:89] "csi-hostpath-resizer-0" [3f3b308d-7a3c-49cd-8fe8-0eb7feadcba0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0903 23:06:42.265148  298546 system_pods.go:89] "csi-hostpathplugin-9knw6" [5440dc0c-60c2-4c03-8d40-2b333ca89855] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0903 23:06:42.265153  298546 system_pods.go:89] "etcd-addons-250903" [ee80fbaf-09cf-4cd3-b35f-ff7c595da819] Running
	I0903 23:06:42.265172  298546 system_pods.go:89] "kindnet-5rbmb" [e40dca87-dbeb-4db1-bb70-163996f6e4bb] Running
	I0903 23:06:42.265180  298546 system_pods.go:89] "kube-apiserver-addons-250903" [b4d2da16-6233-4333-9e10-ab35b28e253e] Running
	I0903 23:06:42.265184  298546 system_pods.go:89] "kube-controller-manager-addons-250903" [5628c7ef-41a9-4a85-bd47-b6835959aeae] Running
	I0903 23:06:42.265193  298546 system_pods.go:89] "kube-ingress-dns-minikube" [fdd48c20-2391-4056-865e-fc63e06cf0eb] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0903 23:06:42.265203  298546 system_pods.go:89] "kube-proxy-72qr6" [753d8202-3500-45b9-b1bb-72c1da1e96d5] Running
	I0903 23:06:42.265209  298546 system_pods.go:89] "kube-scheduler-addons-250903" [81747c4e-b953-4d0f-8377-105f9ac0210d] Running
	I0903 23:06:42.265216  298546 system_pods.go:89] "metrics-server-85b7d694d7-568v4" [412ee54d-941e-468e-870a-7a1c62c864e7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0903 23:06:42.265228  298546 system_pods.go:89] "nvidia-device-plugin-daemonset-vzwcd" [413a92e7-6b19-427c-b018-fa6ead381c02] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0903 23:06:42.265241  298546 system_pods.go:89] "registry-66898fdd98-qrfkz" [c35fb6f1-289a-424d-978e-8267be95ebd9] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0903 23:06:42.265251  298546 system_pods.go:89] "registry-creds-764b6fb674-f9vpw" [871e7382-5284-4275-9eaa-cd1c236e5fd8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0903 23:06:42.265257  298546 system_pods.go:89] "registry-proxy-xk2h9" [f969f46b-d8ef-4f8e-b119-afb8070f1e80] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0903 23:06:42.265264  298546 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mdgkw" [5ee8ba09-4995-4f14-9dcf-fbbf30826c7b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0903 23:06:42.265273  298546 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tr4mf" [bf502b1b-5a75-43cf-a5f1-b96970808cb0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0903 23:06:42.265283  298546 system_pods.go:89] "storage-provisioner" [a0f2c0ae-ad55-4da5-9a0e-5afebffbf6c2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0903 23:06:42.265307  298546 retry.go:31] will retry after 479.251261ms: missing components: kube-dns
	I0903 23:06:42.387012  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:42.557558  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:42.617582  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:42.619886  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:42.753400  298546 system_pods.go:86] 19 kube-system pods found
	I0903 23:06:42.753525  298546 system_pods.go:89] "coredns-66bc5c9577-2vcg7" [fe698436-ba8f-4799-8e14-8280e3fb336c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0903 23:06:42.753547  298546 system_pods.go:89] "csi-hostpath-attacher-0" [fa6816e6-f586-4d7a-9bbb-8054791d8ceb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0903 23:06:42.753565  298546 system_pods.go:89] "csi-hostpath-resizer-0" [3f3b308d-7a3c-49cd-8fe8-0eb7feadcba0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0903 23:06:42.753574  298546 system_pods.go:89] "csi-hostpathplugin-9knw6" [5440dc0c-60c2-4c03-8d40-2b333ca89855] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0903 23:06:42.753579  298546 system_pods.go:89] "etcd-addons-250903" [ee80fbaf-09cf-4cd3-b35f-ff7c595da819] Running
	I0903 23:06:42.753584  298546 system_pods.go:89] "kindnet-5rbmb" [e40dca87-dbeb-4db1-bb70-163996f6e4bb] Running
	I0903 23:06:42.753589  298546 system_pods.go:89] "kube-apiserver-addons-250903" [b4d2da16-6233-4333-9e10-ab35b28e253e] Running
	I0903 23:06:42.753593  298546 system_pods.go:89] "kube-controller-manager-addons-250903" [5628c7ef-41a9-4a85-bd47-b6835959aeae] Running
	I0903 23:06:42.753602  298546 system_pods.go:89] "kube-ingress-dns-minikube" [fdd48c20-2391-4056-865e-fc63e06cf0eb] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0903 23:06:42.753607  298546 system_pods.go:89] "kube-proxy-72qr6" [753d8202-3500-45b9-b1bb-72c1da1e96d5] Running
	I0903 23:06:42.753620  298546 system_pods.go:89] "kube-scheduler-addons-250903" [81747c4e-b953-4d0f-8377-105f9ac0210d] Running
	I0903 23:06:42.753631  298546 system_pods.go:89] "metrics-server-85b7d694d7-568v4" [412ee54d-941e-468e-870a-7a1c62c864e7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0903 23:06:42.753644  298546 system_pods.go:89] "nvidia-device-plugin-daemonset-vzwcd" [413a92e7-6b19-427c-b018-fa6ead381c02] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0903 23:06:42.753654  298546 system_pods.go:89] "registry-66898fdd98-qrfkz" [c35fb6f1-289a-424d-978e-8267be95ebd9] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0903 23:06:42.753665  298546 system_pods.go:89] "registry-creds-764b6fb674-f9vpw" [871e7382-5284-4275-9eaa-cd1c236e5fd8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0903 23:06:42.753672  298546 system_pods.go:89] "registry-proxy-xk2h9" [f969f46b-d8ef-4f8e-b119-afb8070f1e80] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0903 23:06:42.753679  298546 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mdgkw" [5ee8ba09-4995-4f14-9dcf-fbbf30826c7b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0903 23:06:42.753686  298546 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tr4mf" [bf502b1b-5a75-43cf-a5f1-b96970808cb0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0903 23:06:42.753693  298546 system_pods.go:89] "storage-provisioner" [a0f2c0ae-ad55-4da5-9a0e-5afebffbf6c2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0903 23:06:42.753714  298546 retry.go:31] will retry after 591.888419ms: missing components: kube-dns
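	[editor's note] Each retry.go:31 line above shows the k8s-apps poll backing off with a slightly longer, jittered delay (≈194 ms, 353 ms, 479 ms, 592 ms). A sketch of that pattern follows; the exact backoff shape is not visible in the log, so the growing-delay-with-jitter policy here is an assumption:

	-- sketch (Go) --
	// retrysketch.go — sketch of the "will retry after ..." pattern:
	// poll a condition, sleeping a jittered, growing delay between tries.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	func retryUntil(timeout time.Duration, check func() error) error {
		start := time.Now()
		base := 150 * time.Millisecond
		for attempt := 1; ; attempt++ {
			if err := check(); err == nil {
				return nil
			} else if time.Since(start) > timeout {
				return fmt.Errorf("timed out after %d attempts: %w", attempt, err)
			} else {
				// Grow the delay each attempt and add up to 50% jitter so
				// concurrent pollers don't hit the apiserver in lockstep.
				delay := time.Duration(attempt) * base
				delay += time.Duration(rand.Int63n(int64(delay / 2)))
				fmt.Printf("will retry after %v: %v\n", delay, err)
				time.Sleep(delay)
			}
		}
	}

	func main() {
		missing := 3
		_ = retryUntil(time.Minute, func() error {
			if missing > 0 { // stand-in for "missing components: kube-dns"
				missing--
				return errors.New("missing components: kube-dns")
			}
			return nil
		})
	}
	-- /sketch --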
	I0903 23:06:42.842525  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:43.068319  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:43.120314  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:43.126388  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:43.338193  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:43.351107  298546 system_pods.go:86] 19 kube-system pods found
	I0903 23:06:43.351143  298546 system_pods.go:89] "coredns-66bc5c9577-2vcg7" [fe698436-ba8f-4799-8e14-8280e3fb336c] Running
	I0903 23:06:43.351154  298546 system_pods.go:89] "csi-hostpath-attacher-0" [fa6816e6-f586-4d7a-9bbb-8054791d8ceb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0903 23:06:43.351161  298546 system_pods.go:89] "csi-hostpath-resizer-0" [3f3b308d-7a3c-49cd-8fe8-0eb7feadcba0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0903 23:06:43.351168  298546 system_pods.go:89] "csi-hostpathplugin-9knw6" [5440dc0c-60c2-4c03-8d40-2b333ca89855] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0903 23:06:43.351172  298546 system_pods.go:89] "etcd-addons-250903" [ee80fbaf-09cf-4cd3-b35f-ff7c595da819] Running
	I0903 23:06:43.351177  298546 system_pods.go:89] "kindnet-5rbmb" [e40dca87-dbeb-4db1-bb70-163996f6e4bb] Running
	I0903 23:06:43.351182  298546 system_pods.go:89] "kube-apiserver-addons-250903" [b4d2da16-6233-4333-9e10-ab35b28e253e] Running
	I0903 23:06:43.351186  298546 system_pods.go:89] "kube-controller-manager-addons-250903" [5628c7ef-41a9-4a85-bd47-b6835959aeae] Running
	I0903 23:06:43.351193  298546 system_pods.go:89] "kube-ingress-dns-minikube" [fdd48c20-2391-4056-865e-fc63e06cf0eb] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0903 23:06:43.351197  298546 system_pods.go:89] "kube-proxy-72qr6" [753d8202-3500-45b9-b1bb-72c1da1e96d5] Running
	I0903 23:06:43.351202  298546 system_pods.go:89] "kube-scheduler-addons-250903" [81747c4e-b953-4d0f-8377-105f9ac0210d] Running
	I0903 23:06:43.351213  298546 system_pods.go:89] "metrics-server-85b7d694d7-568v4" [412ee54d-941e-468e-870a-7a1c62c864e7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0903 23:06:43.351221  298546 system_pods.go:89] "nvidia-device-plugin-daemonset-vzwcd" [413a92e7-6b19-427c-b018-fa6ead381c02] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0903 23:06:43.351230  298546 system_pods.go:89] "registry-66898fdd98-qrfkz" [c35fb6f1-289a-424d-978e-8267be95ebd9] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0903 23:06:43.351241  298546 system_pods.go:89] "registry-creds-764b6fb674-f9vpw" [871e7382-5284-4275-9eaa-cd1c236e5fd8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0903 23:06:43.351249  298546 system_pods.go:89] "registry-proxy-xk2h9" [f969f46b-d8ef-4f8e-b119-afb8070f1e80] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0903 23:06:43.351259  298546 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mdgkw" [5ee8ba09-4995-4f14-9dcf-fbbf30826c7b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0903 23:06:43.351265  298546 system_pods.go:89] "snapshot-controller-7d9fbc56b8-tr4mf" [bf502b1b-5a75-43cf-a5f1-b96970808cb0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0903 23:06:43.351271  298546 system_pods.go:89] "storage-provisioner" [a0f2c0ae-ad55-4da5-9a0e-5afebffbf6c2] Running
	I0903 23:06:43.351284  298546 system_pods.go:126] duration metric: took 1.720768315s to wait for k8s-apps to be running ...
	I0903 23:06:43.351300  298546 system_svc.go:44] waiting for kubelet service to be running ....
	I0903 23:06:43.351358  298546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0903 23:06:43.368476  298546 system_svc.go:56] duration metric: took 17.165435ms WaitForService to wait for kubelet
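	[editor's note] The kubelet check above shells out to systemctl and relies only on the exit status, since is-active --quiet prints nothing. A local sketch of that check (the real flow runs the command over SSH inside the node, and its logged invocation also carries a literal "service" argument, both omitted here):

	-- sketch (Go) --
	// kubeletcheck.go — sketch of the kubelet liveness check:
	// `systemctl is-active --quiet <unit>` exits 0 iff the unit is active,
	// so the exit code alone answers the question.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func kubeletRunning() bool {
		// --quiet suppresses output; a nil error means exit status 0, i.e. active.
		return exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run() == nil
	}

	func main() {
		fmt.Println("kubelet active:", kubeletRunning())
	}
	-- /sketch --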
	I0903 23:06:43.368551  298546 kubeadm.go:578] duration metric: took 46.501670516s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0903 23:06:43.368584  298546 node_conditions.go:102] verifying NodePressure condition ...
	I0903 23:06:43.372354  298546 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0903 23:06:43.372390  298546 node_conditions.go:123] node cpu capacity is 2
	I0903 23:06:43.372405  298546 node_conditions.go:105] duration metric: took 3.801044ms to run NodePressure ...
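	[editor's note] The node_conditions step reads each node's capacity from its status (here 203034800Ki of ephemeral storage and 2 CPUs). A client-go sketch of listing nodes and printing those two capacities; the kubeconfig path is an assumption:

	-- sketch (Go) --
	// nodecaps.go — sketch of reading node capacity the way the
	// node_conditions lines report it (ephemeral-storage and cpu).
	package main

	import (
		"context"
		"fmt"

		v1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			storage := n.Status.Capacity[v1.ResourceEphemeralStorage]
			cpu := n.Status.Capacity[v1.ResourceCPU]
			// Matches the log's "node storage ephemeral capacity" and
			// "node cpu capacity" lines.
			fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
		}
	}
	-- /sketch --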
	I0903 23:06:43.372418  298546 start.go:241] waiting for startup goroutines ...
	I0903 23:06:43.557382  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:43.619367  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:43.622711  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:43.844357  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:44.057431  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:44.119409  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:44.121879  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:44.338125  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:44.556821  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:44.617871  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:44.620406  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:44.838428  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:45.078021  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:45.182032  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:45.182420  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:45.339898  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:45.556221  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:45.617489  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:45.620054  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:45.846099  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:46.056785  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:46.123318  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:46.138149  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:46.338226  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:46.557510  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:46.618368  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:46.620682  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:46.838368  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:47.057398  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:47.132504  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:47.132896  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:47.338901  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:47.556819  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:47.617993  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:47.621349  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:47.837953  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:48.057104  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:48.117891  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:48.119975  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:48.338362  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:48.558088  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:48.618124  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:48.620628  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:48.839395  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:49.057386  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:49.172784  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:49.173263  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:49.338886  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:49.556147  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:49.617239  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:49.625986  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:49.837864  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:50.057332  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:50.123742  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:50.123992  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:50.338397  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:50.557516  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:50.658278  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:50.658580  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:50.839070  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:51.056552  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:51.123543  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:51.125665  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:51.338697  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:51.557305  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:51.617371  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:51.619953  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:51.839040  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:52.056897  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:52.118146  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:52.121026  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:52.340122  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:52.557676  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:52.619631  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:52.622846  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:52.838188  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:53.057811  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:53.129355  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:53.131421  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:53.339253  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:53.556180  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:53.617167  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:53.619918  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:53.839365  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:54.057602  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:54.118687  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:54.122921  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:54.337859  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:54.558125  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:54.618087  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:54.621245  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:54.838570  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:55.057246  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:55.119769  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:55.130047  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:55.337834  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:55.557367  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:55.617816  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:55.621078  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:55.841167  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:56.057714  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:56.119049  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:56.120865  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:56.337753  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:56.558114  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:56.619199  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:56.620924  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:56.838322  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:57.056944  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:57.147818  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:57.148232  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:57.339827  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:57.557411  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:57.617758  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:57.621290  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:57.841785  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:58.056727  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:58.117867  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:58.120407  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:58.338604  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:58.557594  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:58.617155  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:58.620282  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:58.838492  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:59.057219  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:59.119848  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:59.121166  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:59.338385  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:06:59.558944  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:06:59.624049  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:06:59.624420  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:06:59.838550  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:00.029996  298546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0903 23:07:00.075573  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:00.202214  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:07:00.204328  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:00.338928  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:00.560303  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:00.658661  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:07:00.659546  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:00.839076  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:01.061282  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:01.164719  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:01.166089  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:07:01.253226  298546 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.223192321s)
	W0903 23:07:01.253266  298546 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0903 23:07:01.253286  298546 retry.go:31] will retry after 19.159780156s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
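	[editor's note] The validation failure above means at least one YAML document in ig-crd.yaml lacks the mandatory apiVersion and kind fields — a document with content but no type header, or an empty document left by a stray --- separator, produces exactly this kubectl message. A small pre-flight checker that finds such documents before kubectl does; it assumes gopkg.in/yaml.v3 and takes the manifest path from argv:

	-- sketch (Go) --
	// yamlcheck.go — sketch of a pre-flight for the error above: walk each
	// document in a multi-doc manifest and report any missing apiVersion
	// or kind, which is what kubectl's validation rejects.
	package main

	import (
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	// head holds only the two fields kubectl requires on every object.
	type head struct {
		APIVersion string `yaml:"apiVersion"`
		Kind       string `yaml:"kind"`
	}

	func main() {
		f, err := os.Open(os.Args[1])
		if err != nil {
			panic(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for i := 1; ; i++ {
			var h head
			err := dec.Decode(&h)
			if err == io.EOF {
				break
			}
			if err != nil {
				fmt.Printf("doc %d: parse error: %v\n", i, err)
				continue
			}
			// Mirrors kubectl's "[apiVersion not set, kind not set]" check.
			if h.APIVersion == "" || h.Kind == "" {
				fmt.Printf("doc %d: apiVersion or kind not set\n", i)
			}
		}
	}
	-- /sketch --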
	I0903 23:07:01.338652  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:01.557413  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:01.658295  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:07:01.658494  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:01.838493  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:02.057053  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:02.122143  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:07:02.124558  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:02.338275  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:02.559801  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:02.621047  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:07:02.622019  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:02.838543  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:03.057941  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:03.122363  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:07:03.122776  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:03.338963  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:03.559055  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:03.620178  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:07:03.620456  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:03.838535  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:04.058044  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:04.119023  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:07:04.121447  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:04.339940  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:04.564787  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:04.621073  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:07:04.621847  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:04.838908  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:05.057945  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:05.127643  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:05.128068  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:07:05.338734  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:05.560272  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:05.621178  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:07:05.621361  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:05.839959  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:06.057169  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:06.117572  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:07:06.122510  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:06.342261  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:06.560131  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:06.663456  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:07:06.665041  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:06.838894  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:07.057613  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:07.117425  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:07:07.129542  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:07.338932  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:07.560251  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:07.617812  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:07:07.620340  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:07.838403  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:08.057098  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:08.121286  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:07:08.122739  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:08.337791  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:08.558087  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:08.658440  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:07:08.658683  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:08.838295  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:09.057467  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:09.145790  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:07:09.148467  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:09.339587  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:09.557807  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:09.619273  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:07:09.622465  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:09.839691  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:10.059908  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:10.122755  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:07:10.126904  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:10.340380  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:10.557379  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:10.618310  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:07:10.621054  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:10.838337  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:11.059387  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:11.123913  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:11.124320  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:07:11.338187  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:11.556694  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:11.618953  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:07:11.621205  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:11.838574  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:12.057610  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:12.117458  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:07:12.124130  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:12.341019  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:12.558854  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:12.617625  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:07:12.620538  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:12.846643  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:13.059151  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:13.119796  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:07:13.122297  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:13.338753  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:13.557508  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:13.617572  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:07:13.619930  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:13.840199  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:14.058290  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:14.118020  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:07:14.126135  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:14.340405  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:14.557003  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:14.617827  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:07:14.620198  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:14.840647  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:15.058723  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:15.127324  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:07:15.127539  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:15.338192  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:15.557235  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:15.618801  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:07:15.620555  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:15.840162  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:16.058825  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:16.124229  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:07:16.126885  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:16.337997  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:16.556186  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:16.617132  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:07:16.619837  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:16.838922  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:17.056175  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:17.119048  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:07:17.124459  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:17.337787  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:17.556776  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:17.617789  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:07:17.620023  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:17.838869  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:18.057304  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:18.117651  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:07:18.119870  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:18.338038  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:18.556960  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:18.618830  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:07:18.623170  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:18.838188  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:19.057085  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:19.122224  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:07:19.125207  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:19.337955  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:19.557467  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:19.617961  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:07:19.621634  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:19.838331  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:20.056635  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:20.121056  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:07:20.125412  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:20.338674  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:20.413849  298546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0903 23:07:20.556294  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:20.617735  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0903 23:07:20.620716  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:20.838780  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:21.060839  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:21.150697  298546 kapi.go:107] duration metric: took 1m18.036503454s to wait for kubernetes.io/minikube-addons=registry ...
	I0903 23:07:21.153003  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0903 23:07:21.286792  298546 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0903 23:07:21.286882  298546 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
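The validation failure above ("apiVersion not set, kind not set") means the ig-crd.yaml rendered onto the node lacks the type header that every Kubernetes manifest carries; the suggested --validate=false would only mask that, not fix it. A minimal sketch in apimachinery terms, with hypothetical values, of the two fields the validator expects:

package main

import (
	"encoding/json"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// The two fields the validator reported as unset correspond to TypeMeta,
	// the type header of every Kubernetes manifest. Values here are
	// hypothetical, not the actual gadget CRD.
	header := metav1.TypeMeta{
		APIVersion: "apiextensions.k8s.io/v1",  // the missing "apiVersion"
		Kind:       "CustomResourceDefinition", // the missing "kind"
	}
	out, _ := json.MarshalIndent(header, "", "  ")
	fmt.Println(string(out)) // {"kind": "...", "apiVersion": "..."}
}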
	I0903 23:07:21.338012  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:21.564112  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:21.627411  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:21.838170  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:22.057068  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:22.125118  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:22.338785  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:22.557229  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:22.620320  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:22.839575  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:23.056986  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:23.125514  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:23.356211  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:23.558685  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:23.621683  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:23.838110  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:24.057955  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:24.129331  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:24.340493  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:24.556683  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:24.622035  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:24.839031  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:25.056580  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:25.126761  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:25.337873  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:25.557925  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:25.621136  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:25.838123  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:26.057207  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:26.127010  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:26.338025  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:26.557099  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:26.620660  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:26.839410  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:27.057886  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:27.136969  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:27.339094  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:27.557854  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:27.621586  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:27.838515  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:28.072697  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:28.128140  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:28.339016  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:28.557200  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:28.620665  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:28.838544  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:29.064599  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:29.126588  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:29.338582  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:29.556931  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:29.621150  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:29.838465  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:30.067704  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:30.123331  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:30.338815  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:30.557910  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:30.628110  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:30.839077  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:31.057121  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:31.144617  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:31.338780  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:31.565613  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:31.622234  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:31.840346  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:32.074599  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:32.170632  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:32.340576  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:32.559479  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:32.620980  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:32.838513  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:33.058799  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:33.165726  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:33.350346  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:33.559203  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:33.621802  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:33.838905  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:34.067168  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:34.168517  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:34.342995  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:34.557431  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:34.620609  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:34.838323  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:35.087924  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:35.176237  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:35.338176  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:35.558424  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:35.621370  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:35.837893  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:36.057924  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:36.130374  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:36.339574  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:36.558164  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:36.620465  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:36.839165  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:37.058158  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:37.130984  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:37.338756  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:37.557617  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:37.621041  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:37.838477  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:38.059886  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:38.125394  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:38.338398  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:38.558729  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:38.626585  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:38.844575  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:39.066635  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:39.166400  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:39.341971  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:39.556369  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:39.628486  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:39.845500  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:40.063022  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:40.131773  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:40.338477  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:40.556850  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:40.621766  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:40.844728  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:41.066951  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:41.125156  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:41.337703  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:41.557053  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:41.621484  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:41.839339  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:42.059972  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:42.164723  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:42.339541  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:42.556749  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:42.622088  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:42.839969  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:43.060315  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:43.126910  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:43.337888  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:43.557002  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:43.627375  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:43.839185  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:44.057125  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:44.165938  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:44.341512  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:44.557130  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:44.620752  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:44.842828  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:45.061630  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:45.128497  298546 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0903 23:07:45.348109  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:45.558233  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:45.622400  298546 kapi.go:107] duration metric: took 1m42.505314997s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0903 23:07:45.838895  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:46.056763  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:46.338229  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:46.556619  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:46.840843  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:47.057761  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:47.338324  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:47.557119  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:47.838925  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:48.056731  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:48.338026  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:48.567452  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:48.838724  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:49.058140  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:49.338684  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0903 23:07:49.557285  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:49.838974  298546 kapi.go:107] duration metric: took 1m42.004186045s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0903 23:07:49.842857  298546 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-250903 cluster.
	I0903 23:07:49.846433  298546 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0903 23:07:49.849901  298546 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
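For reference, a minimal client-go sketch (hypothetical pod name and image) of the opt-out described above: a pod carrying a label with the gcp-auth-skip-secret key is skipped by the credential mount.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Hypothetical pod that opts out of gcp-auth credential mounting via the
	// label key named in the addon output above.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "no-gcp-creds",
			Namespace: "default",
			Labels:    map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "app", Image: "busybox"}},
		},
	}
	fmt.Printf("labels: %v\n", pod.Labels)
}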
	I0903 23:07:50.057056  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:50.556673  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:51.057917  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:51.557324  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:52.057379  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:52.557450  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:53.057723  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:53.557988  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:54.056847  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:54.556563  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:55.056518  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:55.559423  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:56.057207  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:56.556623  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:57.056613  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:57.562507  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:58.069972  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:58.556517  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:59.056566  298546 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0903 23:07:59.557040  298546 kapi.go:107] duration metric: took 1m56.004011303s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0903 23:07:59.560448  298546 out.go:179] * Enabled addons: cloud-spanner, storage-provisioner, amd-gpu-device-plugin, nvidia-device-plugin, registry-creds, ingress-dns, metrics-server, default-storageclass, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0903 23:07:59.563537  298546 addons.go:514] duration metric: took 2m2.695119167s for enable addons: enabled=[cloud-spanner storage-provisioner amd-gpu-device-plugin nvidia-device-plugin registry-creds ingress-dns metrics-server default-storageclass yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0903 23:07:59.563587  298546 start.go:246] waiting for cluster config update ...
	I0903 23:07:59.563625  298546 start.go:255] writing updated cluster config ...
	I0903 23:07:59.563965  298546 ssh_runner.go:195] Run: rm -f paused
	I0903 23:07:59.568472  298546 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0903 23:07:59.571583  298546 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-2vcg7" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:07:59.578090  298546 pod_ready.go:94] pod "coredns-66bc5c9577-2vcg7" is "Ready"
	I0903 23:07:59.578119  298546 pod_ready.go:86] duration metric: took 6.505263ms for pod "coredns-66bc5c9577-2vcg7" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:07:59.580382  298546 pod_ready.go:83] waiting for pod "etcd-addons-250903" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:07:59.584944  298546 pod_ready.go:94] pod "etcd-addons-250903" is "Ready"
	I0903 23:07:59.585019  298546 pod_ready.go:86] duration metric: took 4.606793ms for pod "etcd-addons-250903" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:07:59.588530  298546 pod_ready.go:83] waiting for pod "kube-apiserver-addons-250903" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:07:59.593759  298546 pod_ready.go:94] pod "kube-apiserver-addons-250903" is "Ready"
	I0903 23:07:59.593786  298546 pod_ready.go:86] duration metric: took 5.229088ms for pod "kube-apiserver-addons-250903" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:07:59.596041  298546 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-250903" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:07:59.972335  298546 pod_ready.go:94] pod "kube-controller-manager-addons-250903" is "Ready"
	I0903 23:07:59.972361  298546 pod_ready.go:86] duration metric: took 376.284136ms for pod "kube-controller-manager-addons-250903" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:08:00.187403  298546 pod_ready.go:83] waiting for pod "kube-proxy-72qr6" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:08:00.574505  298546 pod_ready.go:94] pod "kube-proxy-72qr6" is "Ready"
	I0903 23:08:00.574544  298546 pod_ready.go:86] duration metric: took 387.009401ms for pod "kube-proxy-72qr6" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:08:00.772970  298546 pod_ready.go:83] waiting for pod "kube-scheduler-addons-250903" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:08:01.172584  298546 pod_ready.go:94] pod "kube-scheduler-addons-250903" is "Ready"
	I0903 23:08:01.172614  298546 pod_ready.go:86] duration metric: took 399.612828ms for pod "kube-scheduler-addons-250903" in "kube-system" namespace to be "Ready" or be gone ...
	I0903 23:08:01.172628  298546 pod_ready.go:40] duration metric: took 1.604117543s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0903 23:08:01.229411  298546 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0903 23:08:01.232658  298546 out.go:179] * Done! kubectl is now configured to use "addons-250903" cluster and "default" namespace by default
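The pod_ready lines above poll each control-plane pod for a Ready condition under the stated 4m0s budget. A minimal client-go sketch of that polling pattern (an assumed stand-alone helper, not the minikube source; selector and interval are illustrative):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether a pod's PodReady condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitLabelReady lists pods matching selector and re-polls until one reports
// Ready, mirroring the "waiting for pod ... current state: Pending" loop in
// the log above.
func waitLabelReady(ctx context.Context, c kubernetes.Interface, ns, selector string) error {
	for {
		pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		for i := range pods.Items {
			if podReady(&pods.Items[i]) {
				return nil
			}
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("timed out waiting for %q: %w", selector, ctx.Err())
		case <-time.After(500 * time.Millisecond):
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	if err := waitLabelReady(ctx, client, "kube-system", "k8s-app=kube-dns"); err != nil {
		panic(err)
	}
	fmt.Println("pod Ready")
}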
	
	
	==> CRI-O <==
	Sep 03 23:10:55 addons-250903 crio[985]: time="2025-09-03 23:10:55.747867652Z" level=info msg="Removed pod sandbox: 9aa353020804f9d4930d65676580d31d228c10dac6383beb86a7b7beb19e1d4b" id=a921ce09-343e-47fe-9c46-8f366d93ebc2 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 03 23:11:15 addons-250903 crio[985]: time="2025-09-03 23:11:15.598739719Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-mbj6d/POD" id=e4b6a06c-dabd-40d0-92c7-ae484d87463b name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 03 23:11:15 addons-250903 crio[985]: time="2025-09-03 23:11:15.598802580Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 03 23:11:15 addons-250903 crio[985]: time="2025-09-03 23:11:15.638311007Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-mbj6d Namespace:default ID:d2bbdd34dfc5b831fc6d6db170c08394e9fbacbc9e475c892828786b2c5e73d1 UID:e4db5924-9a2e-4040-8f7a-dcb80ca82f10 NetNS:/var/run/netns/0fc632e4-9ce8-4f70-a30c-f94acf432817 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 03 23:11:15 addons-250903 crio[985]: time="2025-09-03 23:11:15.638508286Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-mbj6d to CNI network \"kindnet\" (type=ptp)"
	Sep 03 23:11:15 addons-250903 crio[985]: time="2025-09-03 23:11:15.653551582Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-mbj6d Namespace:default ID:d2bbdd34dfc5b831fc6d6db170c08394e9fbacbc9e475c892828786b2c5e73d1 UID:e4db5924-9a2e-4040-8f7a-dcb80ca82f10 NetNS:/var/run/netns/0fc632e4-9ce8-4f70-a30c-f94acf432817 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 03 23:11:15 addons-250903 crio[985]: time="2025-09-03 23:11:15.653700565Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-mbj6d for CNI network kindnet (type=ptp)"
	Sep 03 23:11:15 addons-250903 crio[985]: time="2025-09-03 23:11:15.657290664Z" level=info msg="Ran pod sandbox d2bbdd34dfc5b831fc6d6db170c08394e9fbacbc9e475c892828786b2c5e73d1 with infra container: default/hello-world-app-5d498dc89-mbj6d/POD" id=e4b6a06c-dabd-40d0-92c7-ae484d87463b name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 03 23:11:15 addons-250903 crio[985]: time="2025-09-03 23:11:15.658510918Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=82d3b213-312f-4d36-a056-b7d269039948 name=/runtime.v1.ImageService/ImageStatus
	Sep 03 23:11:15 addons-250903 crio[985]: time="2025-09-03 23:11:15.658726528Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=82d3b213-312f-4d36-a056-b7d269039948 name=/runtime.v1.ImageService/ImageStatus
	Sep 03 23:11:15 addons-250903 crio[985]: time="2025-09-03 23:11:15.659643918Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=e332072d-668e-4b96-9292-1ae2d4ad0b3b name=/runtime.v1.ImageService/PullImage
	Sep 03 23:11:15 addons-250903 crio[985]: time="2025-09-03 23:11:15.662097506Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Sep 03 23:11:15 addons-250903 crio[985]: time="2025-09-03 23:11:15.927069038Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Sep 03 23:11:16 addons-250903 crio[985]: time="2025-09-03 23:11:16.665275189Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6" id=e332072d-668e-4b96-9292-1ae2d4ad0b3b name=/runtime.v1.ImageService/PullImage
	Sep 03 23:11:16 addons-250903 crio[985]: time="2025-09-03 23:11:16.666392632Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=220d19d4-1fa6-430f-a237-ec36ab414779 name=/runtime.v1.ImageService/ImageStatus
	Sep 03 23:11:16 addons-250903 crio[985]: time="2025-09-03 23:11:16.667094863Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b],Size_:4789170,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=220d19d4-1fa6-430f-a237-ec36ab414779 name=/runtime.v1.ImageService/ImageStatus
	Sep 03 23:11:16 addons-250903 crio[985]: time="2025-09-03 23:11:16.668989106Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=faad8fee-bd1f-4ac0-9e7c-478d0834abab name=/runtime.v1.ImageService/ImageStatus
	Sep 03 23:11:16 addons-250903 crio[985]: time="2025-09-03 23:11:16.669690082Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b],Size_:4789170,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=faad8fee-bd1f-4ac0-9e7c-478d0834abab name=/runtime.v1.ImageService/ImageStatus
	Sep 03 23:11:16 addons-250903 crio[985]: time="2025-09-03 23:11:16.676878452Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-mbj6d/hello-world-app" id=48c5df9b-461d-49fa-80cc-308134801ccb name=/runtime.v1.RuntimeService/CreateContainer
	Sep 03 23:11:16 addons-250903 crio[985]: time="2025-09-03 23:11:16.676990963Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 03 23:11:16 addons-250903 crio[985]: time="2025-09-03 23:11:16.705541324Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/9b83e02f4873d39e9d5d9c47c7bdbcdf2e6dadf34a0b80d0f61e8019341ebc22/merged/etc/passwd: no such file or directory"
	Sep 03 23:11:16 addons-250903 crio[985]: time="2025-09-03 23:11:16.705839257Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/9b83e02f4873d39e9d5d9c47c7bdbcdf2e6dadf34a0b80d0f61e8019341ebc22/merged/etc/group: no such file or directory"
	Sep 03 23:11:16 addons-250903 crio[985]: time="2025-09-03 23:11:16.801398066Z" level=info msg="Created container d81d4d5c9e5d5f982e268e7841aa758ced420abfb3c4189eb2553bac2e413b24: default/hello-world-app-5d498dc89-mbj6d/hello-world-app" id=48c5df9b-461d-49fa-80cc-308134801ccb name=/runtime.v1.RuntimeService/CreateContainer
	Sep 03 23:11:16 addons-250903 crio[985]: time="2025-09-03 23:11:16.802234766Z" level=info msg="Starting container: d81d4d5c9e5d5f982e268e7841aa758ced420abfb3c4189eb2553bac2e413b24" id=304b5b43-633e-4206-a5fa-1a4fa41ae8d3 name=/runtime.v1.RuntimeService/StartContainer
	Sep 03 23:11:16 addons-250903 crio[985]: time="2025-09-03 23:11:16.823603099Z" level=info msg="Started container" PID=9327 containerID=d81d4d5c9e5d5f982e268e7841aa758ced420abfb3c4189eb2553bac2e413b24 description=default/hello-world-app-5d498dc89-mbj6d/hello-world-app id=304b5b43-633e-4206-a5fa-1a4fa41ae8d3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d2bbdd34dfc5b831fc6d6db170c08394e9fbacbc9e475c892828786b2c5e73d1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD
	d81d4d5c9e5d5       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        Less than a second ago   Running             hello-world-app           0                   d2bbdd34dfc5b       hello-world-app-5d498dc89-mbj6d
	d66c30febc8cf       docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8                              2 minutes ago            Running             nginx                     0                   0476d564ab260       nginx
	2ebf9c5078624       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago            Running             busybox                   0                   fd3a1b07552dd       busybox
	d2ade34403e02       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef             3 minutes ago            Running             controller                0                   d2de07e12acfc       ingress-nginx-controller-9cc49f96f-lps2h
	d615ad4485dbc       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:b3f8a40cecf84afd8a5299442eab04c52f913ef6194e01dc4fbeb783f9d42c58            3 minutes ago            Running             gadget                    0                   56eb3fd72fbeb       gadget-vfnbb
	ae1b08c6eab14       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98             3 minutes ago            Running             local-path-provisioner    0                   13919567e4fc3       local-path-provisioner-648f6765c9-54lgh
	58c210c206918       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958               4 minutes ago            Running             minikube-ingress-dns      0                   dcdc65051cb34       kube-ingress-dns-minikube
	48faa1a717280       c67c707f59d87e1add5896e856d3ed36fbff2a778620f70d33b799e0541a77e3                                                             4 minutes ago            Exited              patch                     1                   29725779bf9fa       ingress-nginx-admission-patch-6rp9f
	ec1069fb1cc4d       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24   4 minutes ago            Exited              create                    0                   ed38f3a95d581       ingress-nginx-admission-create-pp4fx
	ed4615eebb12e       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             4 minutes ago            Running             storage-provisioner       0                   377aa1d2c68f0       storage-provisioner
	23112b92c568f       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                             4 minutes ago            Running             coredns                   0                   efc3297a1f547       coredns-66bc5c9577-2vcg7
	9e0b60cd641b9       6fc32d66c141152245438e6512df788cb52d64a1617e33561950b0e7a4675abf                                                             5 minutes ago            Running             kube-proxy                0                   583341aae3bd4       kube-proxy-72qr6
	05d3245d2d1c4       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                             5 minutes ago            Running             kindnet-cni               0                   9f913d82f2c45       kindnet-5rbmb
	9a90162d349d4       996be7e86d9b3a549d718de63713d9fea9db1f45ac44863a6770292d7b463570                                                             5 minutes ago            Running             kube-controller-manager   0                   1a7058bfe503b       kube-controller-manager-addons-250903
	8e815d0f91bf9       a25f5ef9c34c37c649f3b4f9631a169221ac2d6f41d9767c7588cd355f76f9ee                                                             5 minutes ago            Running             kube-scheduler            0                   58701ec46c2b4       kube-scheduler-addons-250903
	d0524f4b4302c       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                             5 minutes ago            Running             etcd                      0                   36033d18517db       etcd-addons-250903
	ea6eebdbe4ae0       d291939e994064911484215449d0ab96c535b02adc4fc5d0ad4e438cf71465be                                                             5 minutes ago            Running             kube-apiserver            0                   b2efd7b51e6f1       kube-apiserver-addons-250903
	
	
	==> coredns [23112b92c568fcb459e20903e7605ba739454950f89990d56b378ecd31d04c4b] <==
	[INFO] 10.244.0.12:34673 - 22411 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.003451507s
	[INFO] 10.244.0.12:34673 - 33871 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000138202s
	[INFO] 10.244.0.12:34673 - 5842 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000231988s
	[INFO] 10.244.0.12:55302 - 42751 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000190453s
	[INFO] 10.244.0.12:55302 - 42265 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000242342s
	[INFO] 10.244.0.12:54297 - 2186 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000120995s
	[INFO] 10.244.0.12:54297 - 2407 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000119691s
	[INFO] 10.244.0.12:49006 - 47139 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000094787s
	[INFO] 10.244.0.12:49006 - 46950 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00010227s
	[INFO] 10.244.0.12:58382 - 52436 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001334038s
	[INFO] 10.244.0.12:58382 - 51991 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001296654s
	[INFO] 10.244.0.12:47680 - 12340 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000125877s
	[INFO] 10.244.0.12:47680 - 12200 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000147161s
	[INFO] 10.244.0.21:54606 - 22 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000276665s
	[INFO] 10.244.0.21:60338 - 39540 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000354139s
	[INFO] 10.244.0.21:50365 - 24401 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000145742s
	[INFO] 10.244.0.21:46118 - 50215 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000158141s
	[INFO] 10.244.0.21:45408 - 9644 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000139949s
	[INFO] 10.244.0.21:45553 - 27023 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000088822s
	[INFO] 10.244.0.21:41217 - 12167 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002106515s
	[INFO] 10.244.0.21:41273 - 36631 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.00235516s
	[INFO] 10.244.0.21:59468 - 39363 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.005240338s
	[INFO] 10.244.0.21:38568 - 32710 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.005424579s
	[INFO] 10.244.0.24:59233 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000146119s
	[INFO] 10.244.0.24:33343 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000108777s
	
	
	==> describe nodes <==
	Name:               addons-250903
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-250903
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b3583632deefb20d71cab8d8ac0a8c3504aed1fb
	                    minikube.k8s.io/name=addons-250903
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_03T23_05_52_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-250903
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 03 Sep 2025 23:05:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-250903
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 03 Sep 2025 23:11:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 03 Sep 2025 23:09:55 +0000   Wed, 03 Sep 2025 23:05:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 03 Sep 2025 23:09:55 +0000   Wed, 03 Sep 2025 23:05:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 03 Sep 2025 23:09:55 +0000   Wed, 03 Sep 2025 23:05:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 03 Sep 2025 23:09:55 +0000   Wed, 03 Sep 2025 23:06:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-250903
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 0136a138c577428a9a4417dc305e03b5
	  System UUID:                8e517655-149d-4ff0-b10b-0130e0ea5a24
	  Boot ID:                    ac4c6e80-ebf0-4144-873c-f370ad8320a2
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m15s
	  default                     hello-world-app-5d498dc89-mbj6d             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  gadget                      gadget-vfnbb                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m15s
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-lps2h    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         5m15s
	  kube-system                 coredns-66bc5c9577-2vcg7                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m21s
	  kube-system                 etcd-addons-250903                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m26s
	  kube-system                 kindnet-5rbmb                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m20s
	  kube-system                 kube-apiserver-addons-250903                250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m26s
	  kube-system                 kube-controller-manager-addons-250903       200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m26s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m16s
	  kube-system                 kube-proxy-72qr6                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m20s
	  kube-system                 kube-scheduler-addons-250903                100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m26s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m15s
	  local-path-storage          local-path-provisioner-648f6765c9-54lgh     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             310Mi (3%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m14s                  kube-proxy       
	  Normal   Starting                 5m33s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m33s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m33s (x8 over 5m33s)  kubelet          Node addons-250903 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m33s (x8 over 5m33s)  kubelet          Node addons-250903 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m33s (x8 over 5m33s)  kubelet          Node addons-250903 status is now: NodeHasSufficientPID
	  Normal   Starting                 5m26s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m26s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m26s                  kubelet          Node addons-250903 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m26s                  kubelet          Node addons-250903 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m26s                  kubelet          Node addons-250903 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m22s                  node-controller  Node addons-250903 event: Registered Node addons-250903 in Controller
	  Normal   NodeReady                4m36s                  kubelet          Node addons-250903 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep 3 21:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015921] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.500409] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.035436] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.711023] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.382302] kauditd_printk_skb: 36 callbacks suppressed
	[Sep 3 22:07] hrtimer: interrupt took 14834393 ns
	[Sep 3 22:39] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Sep 3 23:04] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [d0524f4b4302c645e1b8ff690b1555b5908bcdacc558f4af915e0bd697d90154] <==
	{"level":"warn","ts":"2025-09-03T23:05:47.568110Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-03T23:05:47.588820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-03T23:05:47.621234Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-03T23:05:47.639905Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-03T23:05:47.668633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-03T23:05:47.682976Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-03T23:05:47.698222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-03T23:05:47.716295Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-03T23:05:47.733170Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-03T23:05:47.750772Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-03T23:05:47.771351Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-03T23:05:47.789633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-03T23:05:47.802703Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-03T23:05:47.856480Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-03T23:05:47.894394Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-03T23:05:47.914808Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-03T23:05:47.927853Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-03T23:05:48.013740Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55698","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-03T23:06:00.412326Z","caller":"traceutil/trace.go:172","msg":"trace[572024085] transaction","detail":"{read_only:false; response_revision:404; number_of_response:1; }","duration":"107.512273ms","start":"2025-09-03T23:06:00.304792Z","end":"2025-09-03T23:06:00.412304Z","steps":["trace[572024085] 'process raft request'  (duration: 26.418357ms)","trace[572024085] 'compare'  (duration: 25.76173ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-03T23:06:03.903978Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-03T23:06:04.012363Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-03T23:06:25.881444Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-03T23:06:25.895355Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-03T23:06:25.929067Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-03T23:06:25.944761Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42698","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 23:11:17 up  1:53,  0 users,  load average: 0.52, 1.72, 2.62
	Linux addons-250903 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [05d3245d2d1c4e9f3f3f1603e65225a4a815b345fa8330884f1d181c9f120f26] <==
	I0903 23:09:10.680334       1 main.go:301] handling current node
	I0903 23:09:20.680059       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0903 23:09:20.680099       1 main.go:301] handling current node
	I0903 23:09:30.680946       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0903 23:09:30.680980       1 main.go:301] handling current node
	I0903 23:09:40.680098       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0903 23:09:40.680224       1 main.go:301] handling current node
	I0903 23:09:50.680850       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0903 23:09:50.680982       1 main.go:301] handling current node
	I0903 23:10:00.724992       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0903 23:10:00.725037       1 main.go:301] handling current node
	I0903 23:10:10.680025       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0903 23:10:10.680073       1 main.go:301] handling current node
	I0903 23:10:20.680796       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0903 23:10:20.680830       1 main.go:301] handling current node
	I0903 23:10:30.680298       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0903 23:10:30.680410       1 main.go:301] handling current node
	I0903 23:10:40.680367       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0903 23:10:40.680410       1 main.go:301] handling current node
	I0903 23:10:50.679927       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0903 23:10:50.680046       1 main.go:301] handling current node
	I0903 23:11:00.680125       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0903 23:11:00.680169       1 main.go:301] handling current node
	I0903 23:11:10.680301       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0903 23:11:10.680427       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ea6eebdbe4ae0ea59aea7e1e96ace435c1c056a4709a348cc2adda6219f0a7bc] <==
	E0903 23:08:12.713643       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:34372: use of closed network connection
	I0903 23:08:22.094156       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.99.55.199"}
	I0903 23:08:45.202950       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0903 23:08:52.013784       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I0903 23:08:52.542527       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.98.132.173"}
	I0903 23:09:02.858319       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0903 23:09:08.707766       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0903 23:09:18.096236       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0903 23:09:18.096400       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0903 23:09:18.147934       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0903 23:09:18.148061       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0903 23:09:18.170407       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0903 23:09:18.170518       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0903 23:09:18.173313       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0903 23:09:18.173355       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0903 23:09:18.225076       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0903 23:09:18.226687       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0903 23:09:19.171298       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0903 23:09:19.231587       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0903 23:09:19.301624       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0903 23:09:39.610806       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0903 23:10:08.826636       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0903 23:11:06.263317       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0903 23:11:12.171004       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0903 23:11:15.470471       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.98.128.139"}
	
	
	==> kube-controller-manager [9a90162d349d425d3ea96981999187b9f4d816687e5e56d267f65dbae15a7c2a] <==
	E0903 23:09:27.530341       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0903 23:09:27.531448       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0903 23:09:28.968098       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0903 23:09:28.969183       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0903 23:09:37.010130       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0903 23:09:37.011350       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0903 23:09:38.551419       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0903 23:09:38.552510       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0903 23:09:39.962226       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0903 23:09:39.963245       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	I0903 23:09:49.310200       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	E0903 23:09:56.127723       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0903 23:09:56.128718       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0903 23:09:56.612184       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0903 23:09:56.613500       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0903 23:10:01.821667       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0903 23:10:01.822918       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0903 23:10:32.081794       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0903 23:10:32.082839       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0903 23:10:46.932975       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0903 23:10:46.933971       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0903 23:10:49.521558       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0903 23:10:49.522560       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0903 23:11:17.166789       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0903 23:11:17.167988       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [9e0b60cd641b9af9581e6a117871b333dc9525b0e7eacbc6e15a5aabdd15200b] <==
	I0903 23:06:02.697703       1 server_linux.go:53] "Using iptables proxy"
	I0903 23:06:02.950212       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0903 23:06:03.150597       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0903 23:06:03.150706       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0903 23:06:03.150813       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0903 23:06:03.240994       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0903 23:06:03.241079       1 server_linux.go:132] "Using iptables Proxier"
	I0903 23:06:03.258084       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0903 23:06:03.258477       1 server.go:527] "Version info" version="v1.34.0"
	I0903 23:06:03.258541       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0903 23:06:03.261038       1 config.go:200] "Starting service config controller"
	I0903 23:06:03.261068       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0903 23:06:03.266620       1 config.go:106] "Starting endpoint slice config controller"
	I0903 23:06:03.266721       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0903 23:06:03.266766       1 config.go:403] "Starting serviceCIDR config controller"
	I0903 23:06:03.266794       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0903 23:06:03.267484       1 config.go:309] "Starting node config controller"
	I0903 23:06:03.267544       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0903 23:06:03.267575       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0903 23:06:03.369167       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0903 23:06:03.378314       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0903 23:06:03.507871       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [8e815d0f91bf923eb7881c88699a4cff12ef1d0605a5f8c44df69a872062108f] <==
	E0903 23:05:48.996797       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0903 23:05:48.996919       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0903 23:05:48.997019       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0903 23:05:48.997112       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0903 23:05:48.997207       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0903 23:05:48.997330       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0903 23:05:48.997418       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0903 23:05:48.997520       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0903 23:05:48.997612       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0903 23:05:49.000298       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0903 23:05:49.000549       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0903 23:05:49.001599       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0903 23:05:49.001776       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0903 23:05:49.001942       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0903 23:05:49.002170       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0903 23:05:49.803563       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0903 23:05:49.803773       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0903 23:05:49.848480       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0903 23:05:49.994376       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0903 23:05:50.036046       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0903 23:05:50.057321       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0903 23:05:50.076644       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0903 23:05:50.113131       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0903 23:05:50.425928       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I0903 23:05:53.090845       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 03 23:10:51 addons-250903 kubelet[1497]: E0903 23:10:51.706818    1497 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/5a2456496da34ee1ec9d2c92f46a740d6f7893673bad9bcaf952829e4d5f4854/diff" to get inode usage: stat /var/lib/containers/storage/overlay/5a2456496da34ee1ec9d2c92f46a740d6f7893673bad9bcaf952829e4d5f4854/diff: no such file or directory, extraDiskErr: <nil>
	Sep 03 23:10:51 addons-250903 kubelet[1497]: E0903 23:10:51.712192    1497 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/5a2456496da34ee1ec9d2c92f46a740d6f7893673bad9bcaf952829e4d5f4854/diff" to get inode usage: stat /var/lib/containers/storage/overlay/5a2456496da34ee1ec9d2c92f46a740d6f7893673bad9bcaf952829e4d5f4854/diff: no such file or directory, extraDiskErr: <nil>
	Sep 03 23:10:51 addons-250903 kubelet[1497]: E0903 23:10:51.715643    1497 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/46185caa8295dd872fdfb5deb5f0c1aba7938cc8b432cd5d66396a1fc337d13c/diff" to get inode usage: stat /var/lib/containers/storage/overlay/46185caa8295dd872fdfb5deb5f0c1aba7938cc8b432cd5d66396a1fc337d13c/diff: no such file or directory, extraDiskErr: <nil>
	Sep 03 23:10:51 addons-250903 kubelet[1497]: E0903 23:10:51.716722    1497 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/2415f40cd8d6af59759fbed4482e7feb6b307e0b61954bd956361e2b862d9702/diff" to get inode usage: stat /var/lib/containers/storage/overlay/2415f40cd8d6af59759fbed4482e7feb6b307e0b61954bd956361e2b862d9702/diff: no such file or directory, extraDiskErr: <nil>
	Sep 03 23:10:51 addons-250903 kubelet[1497]: E0903 23:10:51.724122    1497 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/706421ba6e9c3b409a3a43035ceca0a8bc68876260839ee59e98a9c51f19788f/diff" to get inode usage: stat /var/lib/containers/storage/overlay/706421ba6e9c3b409a3a43035ceca0a8bc68876260839ee59e98a9c51f19788f/diff: no such file or directory, extraDiskErr: <nil>
	Sep 03 23:10:51 addons-250903 kubelet[1497]: E0903 23:10:51.725287    1497 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/39c6580d87fcbddbccba2da91f2ca7de4ee2f3ab7a65aef852ad0bee678b23cc/diff" to get inode usage: stat /var/lib/containers/storage/overlay/39c6580d87fcbddbccba2da91f2ca7de4ee2f3ab7a65aef852ad0bee678b23cc/diff: no such file or directory, extraDiskErr: <nil>
	Sep 03 23:10:51 addons-250903 kubelet[1497]: E0903 23:10:51.730553    1497 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/006fa974f1302b898863e032797719130b559095d3a9d6f332391ccc1f8bbb8a/diff" to get inode usage: stat /var/lib/containers/storage/overlay/006fa974f1302b898863e032797719130b559095d3a9d6f332391ccc1f8bbb8a/diff: no such file or directory, extraDiskErr: <nil>
	Sep 03 23:10:51 addons-250903 kubelet[1497]: E0903 23:10:51.731826    1497 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/706421ba6e9c3b409a3a43035ceca0a8bc68876260839ee59e98a9c51f19788f/diff" to get inode usage: stat /var/lib/containers/storage/overlay/706421ba6e9c3b409a3a43035ceca0a8bc68876260839ee59e98a9c51f19788f/diff: no such file or directory, extraDiskErr: <nil>
	Sep 03 23:10:51 addons-250903 kubelet[1497]: E0903 23:10:51.732938    1497 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/b1cfdf507f0e2a194cff306fbf6e7b335f66ce169f47ac7d34a01c037b2bf1f8/diff" to get inode usage: stat /var/lib/containers/storage/overlay/b1cfdf507f0e2a194cff306fbf6e7b335f66ce169f47ac7d34a01c037b2bf1f8/diff: no such file or directory, extraDiskErr: <nil>
	Sep 03 23:10:51 addons-250903 kubelet[1497]: E0903 23:10:51.759454    1497 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/5231e78cf00b180e632da20765efad3d2fea7dc6f975d696079f53f2168e5562/diff" to get inode usage: stat /var/lib/containers/storage/overlay/5231e78cf00b180e632da20765efad3d2fea7dc6f975d696079f53f2168e5562/diff: no such file or directory, extraDiskErr: <nil>
	Sep 03 23:10:51 addons-250903 kubelet[1497]: E0903 23:10:51.759472    1497 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/a044a48ae8c2784fee2167b1197218593cba5945c91e775f5c20576fd39c140c/diff" to get inode usage: stat /var/lib/containers/storage/overlay/a044a48ae8c2784fee2167b1197218593cba5945c91e775f5c20576fd39c140c/diff: no such file or directory, extraDiskErr: <nil>
	Sep 03 23:10:51 addons-250903 kubelet[1497]: E0903 23:10:51.761934    1497 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/c848e174585c2c32cbd4a13a0f68f82f2a4b9e341487febc742ac89361b661df/diff" to get inode usage: stat /var/lib/containers/storage/overlay/c848e174585c2c32cbd4a13a0f68f82f2a4b9e341487febc742ac89361b661df/diff: no such file or directory, extraDiskErr: <nil>
	Sep 03 23:10:51 addons-250903 kubelet[1497]: E0903 23:10:51.768074    1497 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/e67d0280a30f40b4d91e5698f4b3526f30518efa5c52aa71b40a52f977780660/diff" to get inode usage: stat /var/lib/containers/storage/overlay/e67d0280a30f40b4d91e5698f4b3526f30518efa5c52aa71b40a52f977780660/diff: no such file or directory, extraDiskErr: <nil>
	Sep 03 23:10:51 addons-250903 kubelet[1497]: E0903 23:10:51.786939    1497 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/54680532c68a4339f7378fc22769543cc64411cd418c0e2a3643fb66704bb233/diff" to get inode usage: stat /var/lib/containers/storage/overlay/54680532c68a4339f7378fc22769543cc64411cd418c0e2a3643fb66704bb233/diff: no such file or directory, extraDiskErr: <nil>
	Sep 03 23:10:51 addons-250903 kubelet[1497]: E0903 23:10:51.797583    1497 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/2415f40cd8d6af59759fbed4482e7feb6b307e0b61954bd956361e2b862d9702/diff" to get inode usage: stat /var/lib/containers/storage/overlay/2415f40cd8d6af59759fbed4482e7feb6b307e0b61954bd956361e2b862d9702/diff: no such file or directory, extraDiskErr: <nil>
	Sep 03 23:10:51 addons-250903 kubelet[1497]: E0903 23:10:51.805867    1497 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/e67d0280a30f40b4d91e5698f4b3526f30518efa5c52aa71b40a52f977780660/diff" to get inode usage: stat /var/lib/containers/storage/overlay/e67d0280a30f40b4d91e5698f4b3526f30518efa5c52aa71b40a52f977780660/diff: no such file or directory, extraDiskErr: <nil>
	Sep 03 23:10:51 addons-250903 kubelet[1497]: E0903 23:10:51.894952    1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1756941051894635896 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:597483} inodes_used:{value:225}}"
	Sep 03 23:10:51 addons-250903 kubelet[1497]: E0903 23:10:51.894986    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1756941051894635896 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:597483} inodes_used:{value:225}}"
	Sep 03 23:11:01 addons-250903 kubelet[1497]: E0903 23:11:01.897597    1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1756941061897257942 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:597483} inodes_used:{value:225}}"
	Sep 03 23:11:01 addons-250903 kubelet[1497]: E0903 23:11:01.897641    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1756941061897257942 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:597483} inodes_used:{value:225}}"
	Sep 03 23:11:11 addons-250903 kubelet[1497]: E0903 23:11:11.900579    1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1756941071900260193 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:597483} inodes_used:{value:225}}"
	Sep 03 23:11:11 addons-250903 kubelet[1497]: E0903 23:11:11.900618    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1756941071900260193 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:597483} inodes_used:{value:225}}"
	Sep 03 23:11:13 addons-250903 kubelet[1497]: E0903 23:11:13.837075    1497 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/8e6309874628490faa429db6658868bde23a66513888d3c40fb77b23c7f9c7f1/diff" to get inode usage: stat /var/lib/containers/storage/overlay/8e6309874628490faa429db6658868bde23a66513888d3c40fb77b23c7f9c7f1/diff: no such file or directory, extraDiskErr: <nil>
	Sep 03 23:11:15 addons-250903 kubelet[1497]: I0903 23:11:15.345692    1497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbn95\" (UniqueName: \"kubernetes.io/projected/e4db5924-9a2e-4040-8f7a-dcb80ca82f10-kube-api-access-gbn95\") pod \"hello-world-app-5d498dc89-mbj6d\" (UID: \"e4db5924-9a2e-4040-8f7a-dcb80ca82f10\") " pod="default/hello-world-app-5d498dc89-mbj6d"
	Sep 03 23:11:15 addons-250903 kubelet[1497]: W0903 23:11:15.655568    1497 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/08963ffff7f68776dd205a9988c5b154764c48b8e613645bb5c03470d15885fa/crio-d2bbdd34dfc5b831fc6d6db170c08394e9fbacbc9e475c892828786b2c5e73d1 WatchSource:0}: Error finding container d2bbdd34dfc5b831fc6d6db170c08394e9fbacbc9e475c892828786b2c5e73d1: Status 404 returned error can't find the container with id d2bbdd34dfc5b831fc6d6db170c08394e9fbacbc9e475c892828786b2c5e73d1
	
	
	==> storage-provisioner [ed4615eebb12e495818701bf451a25426a204092ebcfefb12de42da426914cc6] <==
	W0903 23:10:51.909625       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0903 23:10:53.913366       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0903 23:10:53.917934       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0903 23:10:55.920904       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0903 23:10:55.925363       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0903 23:10:57.928302       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0903 23:10:57.934926       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0903 23:10:59.937530       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0903 23:10:59.942765       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0903 23:11:01.945405       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0903 23:11:01.949687       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0903 23:11:03.953519       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0903 23:11:03.958151       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0903 23:11:05.961519       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0903 23:11:05.966086       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0903 23:11:07.969610       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0903 23:11:07.973945       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0903 23:11:09.977502       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0903 23:11:09.983607       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0903 23:11:11.986906       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0903 23:11:11.991711       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0903 23:11:13.995429       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0903 23:11:14.003474       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0903 23:11:16.008848       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0903 23:11:16.024966       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
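Note on the storage-provisioner warnings in the log above: they are client-go deprecation notices emitted for every watch the provisioner opens against the v1 Endpoints API, not a failure cause; v1 Endpoints is deprecated in recent Kubernetes releases but still served. A minimal query against the replacement API, assuming the same kubeconfig context the test uses, would be:

	kubectl --context addons-250903 get endpointslices.discovery.k8s.io -A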
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-250903 -n addons-250903
helpers_test.go:269: (dbg) Run:  kubectl --context addons-250903 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-pp4fx ingress-nginx-admission-patch-6rp9f
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-250903 describe pod ingress-nginx-admission-create-pp4fx ingress-nginx-admission-patch-6rp9f
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-250903 describe pod ingress-nginx-admission-create-pp4fx ingress-nginx-admission-patch-6rp9f: exit status 1 (113.090343ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-pp4fx" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-6rp9f" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-250903 describe pod ingress-nginx-admission-create-pp4fx ingress-nginx-admission-patch-6rp9f: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-250903 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-250903 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-250903 addons disable ingress --alsologtostderr -v=1: (7.802043195s)
--- FAIL: TestAddons/parallel/Ingress (155.84s)

x
+
TestFunctional/parallel/ServiceCmdConnect (603.77s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-062474 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-062474 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-m2qxp" [23c544ce-47ac-4214-8e55-0e56b64650f3] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-062474 -n functional-062474
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-09-03 23:24:49.218067743 +0000 UTC m=+1201.560633762
functional_test.go:1645: (dbg) Run:  kubectl --context functional-062474 describe po hello-node-connect-7d85dfc575-m2qxp -n default
functional_test.go:1645: (dbg) kubectl --context functional-062474 describe po hello-node-connect-7d85dfc575-m2qxp -n default:
Name:             hello-node-connect-7d85dfc575-m2qxp
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-062474/192.168.49.2
Start Time:       Wed, 03 Sep 2025 23:14:48 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7sknf (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-7sknf:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-m2qxp to functional-062474
Normal   Pulling    6m58s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     6m58s (x5 over 10m)     kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
Warning  Failed     6m58s (x5 over 10m)     kubelet            Error: ErrImagePull
Normal   BackOff    4m51s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m51s (x21 over 9m59s)  kubelet            Error: ImagePullBackOff
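The Failed events above carry the root cause: the deployment was created with the short image name kicbase/echo-server, and this cluster's CRI-O has no unqualified-search registries configured, so short-name resolution fails before any pull is attempted. A minimal sketch of the two usual remedies, assuming docker.io is the intended registry (illustrative config, not the registries.conf from this run):

	# /etc/containers/registries.conf, or a drop-in under registries.conf.d/
	# Option 1: allow short names to fall back to Docker Hub
	unqualified-search-registries = ["docker.io"]

	# Option 2: pin this one short name via an alias table
	[aliases]
	"kicbase/echo-server" = "docker.io/kicbase/echo-server"

Equivalently, passing a fully qualified reference (for example docker.io/kicbase/echo-server) to kubectl create deployment sidesteps short-name resolution entirely.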
functional_test.go:1645: (dbg) Run:  kubectl --context functional-062474 logs hello-node-connect-7d85dfc575-m2qxp -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-062474 logs hello-node-connect-7d85dfc575-m2qxp -n default: exit status 1 (95.826507ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-m2qxp" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1645: kubectl --context functional-062474 logs hello-node-connect-7d85dfc575-m2qxp -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-062474 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-m2qxp
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-062474/192.168.49.2
Start Time:       Wed, 03 Sep 2025 23:14:48 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7sknf (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-7sknf:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-m2qxp to functional-062474
Normal   Pulling    6m58s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     6m58s (x5 over 10m)     kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
Warning  Failed     6m58s (x5 over 10m)     kubelet            Error: ErrImagePull
Normal   BackOff    4m51s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m51s (x21 over 9m59s)  kubelet            Error: ImagePullBackOff

functional_test.go:1618: (dbg) Run:  kubectl --context functional-062474 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-062474 logs -l app=hello-node-connect: exit status 1 (91.617375ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-m2qxp" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1620: "kubectl --context functional-062474 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-062474 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.96.96.101
IPs:                      10.96.96.101
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30215/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
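The empty Endpoints line above follows directly from the pod state: a Service only gains endpoints from pods that are Ready, and the lone app=hello-node-connect pod never left ImagePullBackOff, so the NodePort had nothing to route to. A quick way to confirm, assuming the same kubeconfig context, would be:

	kubectl --context functional-062474 get pods -l app=hello-node-connect -o wide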
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-062474
helpers_test.go:243: (dbg) docker inspect functional-062474:

-- stdout --
	[
	    {
	        "Id": "2f33348302f12cf04905cbbbc440b27806cb5aabe4efd87eedb473e046b5c621",
	        "Created": "2025-09-03T23:12:30.756698742Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 315459,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-03T23:12:30.832931911Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ebcae716971f7c51ed3fd14f6fe4cc79c434c2b1abdabc67816f3649f4bf0002",
	        "ResolvConfPath": "/var/lib/docker/containers/2f33348302f12cf04905cbbbc440b27806cb5aabe4efd87eedb473e046b5c621/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2f33348302f12cf04905cbbbc440b27806cb5aabe4efd87eedb473e046b5c621/hostname",
	        "HostsPath": "/var/lib/docker/containers/2f33348302f12cf04905cbbbc440b27806cb5aabe4efd87eedb473e046b5c621/hosts",
	        "LogPath": "/var/lib/docker/containers/2f33348302f12cf04905cbbbc440b27806cb5aabe4efd87eedb473e046b5c621/2f33348302f12cf04905cbbbc440b27806cb5aabe4efd87eedb473e046b5c621-json.log",
	        "Name": "/functional-062474",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-062474:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-062474",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2f33348302f12cf04905cbbbc440b27806cb5aabe4efd87eedb473e046b5c621",
	                "LowerDir": "/var/lib/docker/overlay2/15c7a535920fbd8432a2af34ee054fed17e1a9ffd683d555cfcccb6dc21019c0-init/diff:/var/lib/docker/overlay2/cfed3f2232112709c4ba7d89bdbefe61b3142a45fe30ee6468d5e0113ef24166/diff",
	                "MergedDir": "/var/lib/docker/overlay2/15c7a535920fbd8432a2af34ee054fed17e1a9ffd683d555cfcccb6dc21019c0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/15c7a535920fbd8432a2af34ee054fed17e1a9ffd683d555cfcccb6dc21019c0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/15c7a535920fbd8432a2af34ee054fed17e1a9ffd683d555cfcccb6dc21019c0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-062474",
	                "Source": "/var/lib/docker/volumes/functional-062474/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-062474",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-062474",
	                "name.minikube.sigs.k8s.io": "functional-062474",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "bc68fba21d5527d34224da6961cb61b1855f62dbf9cdc62b7e66eb4081bff814",
	            "SandboxKey": "/var/run/docker/netns/bc68fba21d55",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-062474": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "56:0f:72:99:f8:7a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "00fe5d331afe723385507ae5da77bde4f4032dbc6046ab982409803724dcd5dc",
	                    "EndpointID": "b55021171ae6cbfcb2c769ed7be1c24a4a9be527a36cda5ef8533c113c4d24c0",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-062474",
	                        "2f33348302f1"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-062474 -n functional-062474
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-062474 logs -n 25: (1.763253508s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                            ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-062474 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                    │ functional-062474 │ jenkins │ v1.36.0 │ 03 Sep 25 23:13 UTC │ 03 Sep 25 23:13 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                           │ minikube          │ jenkins │ v1.36.0 │ 03 Sep 25 23:13 UTC │ 03 Sep 25 23:13 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                        │ minikube          │ jenkins │ v1.36.0 │ 03 Sep 25 23:13 UTC │ 03 Sep 25 23:13 UTC │
	│ kubectl │ functional-062474 kubectl -- --context functional-062474 get pods                                                          │ functional-062474 │ jenkins │ v1.36.0 │ 03 Sep 25 23:13 UTC │ 03 Sep 25 23:13 UTC │
	│ start   │ -p functional-062474 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                   │ functional-062474 │ jenkins │ v1.36.0 │ 03 Sep 25 23:13 UTC │ 03 Sep 25 23:14 UTC │
	│ service │ invalid-svc -p functional-062474                                                                                           │ functional-062474 │ jenkins │ v1.36.0 │ 03 Sep 25 23:14 UTC │                     │
	│ config  │ functional-062474 config unset cpus                                                                                        │ functional-062474 │ jenkins │ v1.36.0 │ 03 Sep 25 23:14 UTC │ 03 Sep 25 23:14 UTC │
	│ cp      │ functional-062474 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                         │ functional-062474 │ jenkins │ v1.36.0 │ 03 Sep 25 23:14 UTC │ 03 Sep 25 23:14 UTC │
	│ config  │ functional-062474 config get cpus                                                                                          │ functional-062474 │ jenkins │ v1.36.0 │ 03 Sep 25 23:14 UTC │                     │
	│ config  │ functional-062474 config set cpus 2                                                                                        │ functional-062474 │ jenkins │ v1.36.0 │ 03 Sep 25 23:14 UTC │ 03 Sep 25 23:14 UTC │
	│ config  │ functional-062474 config get cpus                                                                                          │ functional-062474 │ jenkins │ v1.36.0 │ 03 Sep 25 23:14 UTC │ 03 Sep 25 23:14 UTC │
	│ config  │ functional-062474 config unset cpus                                                                                        │ functional-062474 │ jenkins │ v1.36.0 │ 03 Sep 25 23:14 UTC │ 03 Sep 25 23:14 UTC │
	│ ssh     │ functional-062474 ssh -n functional-062474 sudo cat /home/docker/cp-test.txt                                               │ functional-062474 │ jenkins │ v1.36.0 │ 03 Sep 25 23:14 UTC │ 03 Sep 25 23:14 UTC │
	│ config  │ functional-062474 config get cpus                                                                                          │ functional-062474 │ jenkins │ v1.36.0 │ 03 Sep 25 23:14 UTC │                     │
	│ ssh     │ functional-062474 ssh echo hello                                                                                           │ functional-062474 │ jenkins │ v1.36.0 │ 03 Sep 25 23:14 UTC │ 03 Sep 25 23:14 UTC │
	│ cp      │ functional-062474 cp functional-062474:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1657253566/001/cp-test.txt │ functional-062474 │ jenkins │ v1.36.0 │ 03 Sep 25 23:14 UTC │ 03 Sep 25 23:14 UTC │
	│ ssh     │ functional-062474 ssh cat /etc/hostname                                                                                    │ functional-062474 │ jenkins │ v1.36.0 │ 03 Sep 25 23:14 UTC │ 03 Sep 25 23:14 UTC │
	│ ssh     │ functional-062474 ssh -n functional-062474 sudo cat /home/docker/cp-test.txt                                               │ functional-062474 │ jenkins │ v1.36.0 │ 03 Sep 25 23:14 UTC │ 03 Sep 25 23:14 UTC │
	│ tunnel  │ functional-062474 tunnel --alsologtostderr                                                                                 │ functional-062474 │ jenkins │ v1.36.0 │ 03 Sep 25 23:14 UTC │                     │
	│ tunnel  │ functional-062474 tunnel --alsologtostderr                                                                                 │ functional-062474 │ jenkins │ v1.36.0 │ 03 Sep 25 23:14 UTC │                     │
	│ cp      │ functional-062474 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                  │ functional-062474 │ jenkins │ v1.36.0 │ 03 Sep 25 23:14 UTC │ 03 Sep 25 23:14 UTC │
	│ ssh     │ functional-062474 ssh -n functional-062474 sudo cat /tmp/does/not/exist/cp-test.txt                                        │ functional-062474 │ jenkins │ v1.36.0 │ 03 Sep 25 23:14 UTC │ 03 Sep 25 23:14 UTC │
	│ tunnel  │ functional-062474 tunnel --alsologtostderr                                                                                 │ functional-062474 │ jenkins │ v1.36.0 │ 03 Sep 25 23:14 UTC │                     │
	│ addons  │ functional-062474 addons list                                                                                              │ functional-062474 │ jenkins │ v1.36.0 │ 03 Sep 25 23:14 UTC │ 03 Sep 25 23:14 UTC │
	│ addons  │ functional-062474 addons list -o json                                                                                      │ functional-062474 │ jenkins │ v1.36.0 │ 03 Sep 25 23:14 UTC │ 03 Sep 25 23:14 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/03 23:13:52
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0903 23:13:52.616618  320181 out.go:360] Setting OutFile to fd 1 ...
	I0903 23:13:52.616719  320181 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 23:13:52.616723  320181 out.go:374] Setting ErrFile to fd 2...
	I0903 23:13:52.616727  320181 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 23:13:52.617007  320181 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21341-295927/.minikube/bin
	I0903 23:13:52.617359  320181 out.go:368] Setting JSON to false
	I0903 23:13:52.618220  320181 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":6983,"bootTime":1756934250,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0903 23:13:52.618279  320181 start.go:140] virtualization:  
	I0903 23:13:52.621798  320181 out.go:179] * [functional-062474] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	I0903 23:13:52.625723  320181 out.go:179]   - MINIKUBE_LOCATION=21341
	I0903 23:13:52.625827  320181 notify.go:220] Checking for updates...
	I0903 23:13:52.631559  320181 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0903 23:13:52.634510  320181 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21341-295927/kubeconfig
	I0903 23:13:52.637390  320181 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21341-295927/.minikube
	I0903 23:13:52.640220  320181 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0903 23:13:52.643105  320181 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0903 23:13:52.646588  320181 config.go:182] Loaded profile config "functional-062474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0903 23:13:52.646761  320181 driver.go:421] Setting default libvirt URI to qemu:///system
	I0903 23:13:52.680035  320181 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0903 23:13:52.680150  320181 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0903 23:13:52.744531  320181 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-09-03 23:13:52.734403752 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0903 23:13:52.744630  320181 docker.go:318] overlay module found
	I0903 23:13:52.748414  320181 out.go:179] * Using the docker driver based on existing profile
	I0903 23:13:52.751217  320181 start.go:304] selected driver: docker
	I0903 23:13:52.751226  320181 start.go:918] validating driver "docker" against &{Name:functional-062474 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-062474 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0903 23:13:52.751330  320181 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0903 23:13:52.751435  320181 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0903 23:13:52.838942  320181 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-09-03 23:13:52.828726251 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0903 23:13:52.839359  320181 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0903 23:13:52.839377  320181 cni.go:84] Creating CNI manager for ""
	I0903 23:13:52.839428  320181 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0903 23:13:52.839471  320181 start.go:348] cluster config:
	{Name:functional-062474 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-062474 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0903 23:13:52.843581  320181 out.go:179] * Starting "functional-062474" primary control-plane node in "functional-062474" cluster
	I0903 23:13:52.846470  320181 cache.go:123] Beginning downloading kic base image for docker with crio
	I0903 23:13:52.849388  320181 out.go:179] * Pulling base image v0.0.47-1756116447-21413 ...
	I0903 23:13:52.852294  320181 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0903 23:13:52.852342  320181 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21341-295927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4
	I0903 23:13:52.852358  320181 cache.go:58] Caching tarball of preloaded images
	I0903 23:13:52.852412  320181 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 in local docker daemon
	I0903 23:13:52.852447  320181 preload.go:172] Found /home/jenkins/minikube-integration/21341-295927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0903 23:13:52.852455  320181 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0903 23:13:52.852560  320181 profile.go:143] Saving config to /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/functional-062474/config.json ...
	I0903 23:13:52.873056  320181 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 in local docker daemon, skipping pull
	I0903 23:13:52.873067  320181 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 exists in daemon, skipping load
	I0903 23:13:52.873087  320181 cache.go:232] Successfully downloaded all kic artifacts
	I0903 23:13:52.873109  320181 start.go:360] acquireMachinesLock for functional-062474: {Name:mka20a007e56005e87e1119220071190b6a81db4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0903 23:13:52.873176  320181 start.go:364] duration metric: took 51.192µs to acquireMachinesLock for "functional-062474"
	I0903 23:13:52.873197  320181 start.go:96] Skipping create...Using existing machine configuration
	I0903 23:13:52.873201  320181 fix.go:54] fixHost starting: 
	I0903 23:13:52.873477  320181 cli_runner.go:164] Run: docker container inspect functional-062474 --format={{.State.Status}}
	I0903 23:13:52.890490  320181 fix.go:112] recreateIfNeeded on functional-062474: state=Running err=<nil>
	W0903 23:13:52.890519  320181 fix.go:138] unexpected machine state, will restart: <nil>
	I0903 23:13:52.893758  320181 out.go:252] * Updating the running docker "functional-062474" container ...
	I0903 23:13:52.893797  320181 machine.go:93] provisionDockerMachine start ...
	I0903 23:13:52.893874  320181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-062474
	I0903 23:13:52.913566  320181 main.go:141] libmachine: Using SSH client type: native
	I0903 23:13:52.913877  320181 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e9ef0] 0x3ec6b0 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I0903 23:13:52.913884  320181 main.go:141] libmachine: About to run SSH command:
	hostname
	I0903 23:13:53.039932  320181 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-062474
	
	I0903 23:13:53.039946  320181 ubuntu.go:182] provisioning hostname "functional-062474"
	I0903 23:13:53.040022  320181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-062474
	I0903 23:13:53.058632  320181 main.go:141] libmachine: Using SSH client type: native
	I0903 23:13:53.058941  320181 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e9ef0] 0x3ec6b0 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I0903 23:13:53.058950  320181 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-062474 && echo "functional-062474" | sudo tee /etc/hostname
	I0903 23:13:53.195351  320181 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-062474
	
	I0903 23:13:53.195423  320181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-062474
	I0903 23:13:53.214141  320181 main.go:141] libmachine: Using SSH client type: native
	I0903 23:13:53.214450  320181 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e9ef0] 0x3ec6b0 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I0903 23:13:53.214465  320181 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-062474' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-062474/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-062474' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0903 23:13:53.344093  320181 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0903 23:13:53.344111  320181 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21341-295927/.minikube CaCertPath:/home/jenkins/minikube-integration/21341-295927/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21341-295927/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21341-295927/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21341-295927/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21341-295927/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21341-295927/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21341-295927/.minikube}
	I0903 23:13:53.344137  320181 ubuntu.go:190] setting up certificates
	I0903 23:13:53.344153  320181 provision.go:84] configureAuth start
	I0903 23:13:53.344225  320181 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-062474
	I0903 23:13:53.362628  320181 provision.go:143] copyHostCerts
	I0903 23:13:53.362703  320181 exec_runner.go:144] found /home/jenkins/minikube-integration/21341-295927/.minikube/cert.pem, removing ...
	I0903 23:13:53.362718  320181 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21341-295927/.minikube/cert.pem
	I0903 23:13:53.362794  320181 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21341-295927/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21341-295927/.minikube/cert.pem (1123 bytes)
	I0903 23:13:53.362888  320181 exec_runner.go:144] found /home/jenkins/minikube-integration/21341-295927/.minikube/key.pem, removing ...
	I0903 23:13:53.362892  320181 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21341-295927/.minikube/key.pem
	I0903 23:13:53.362917  320181 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21341-295927/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21341-295927/.minikube/key.pem (1675 bytes)
	I0903 23:13:53.362966  320181 exec_runner.go:144] found /home/jenkins/minikube-integration/21341-295927/.minikube/ca.pem, removing ...
	I0903 23:13:53.362969  320181 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21341-295927/.minikube/ca.pem
	I0903 23:13:53.362990  320181 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21341-295927/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21341-295927/.minikube/ca.pem (1082 bytes)
	I0903 23:13:53.363038  320181 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21341-295927/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21341-295927/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21341-295927/.minikube/certs/ca-key.pem org=jenkins.functional-062474 san=[127.0.0.1 192.168.49.2 functional-062474 localhost minikube]
	I0903 23:13:53.962774  320181 provision.go:177] copyRemoteCerts
	I0903 23:13:53.962829  320181 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0903 23:13:53.962879  320181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-062474
	I0903 23:13:53.979873  320181 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21341-295927/.minikube/machines/functional-062474/id_rsa Username:docker}
	I0903 23:13:54.085031  320181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-295927/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0903 23:13:54.111947  320181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-295927/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0903 23:13:54.138553  320181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-295927/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0903 23:13:54.163092  320181 provision.go:87] duration metric: took 818.914699ms to configureAuth
	I0903 23:13:54.163110  320181 ubuntu.go:206] setting minikube options for container-runtime
	I0903 23:13:54.163311  320181 config.go:182] Loaded profile config "functional-062474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0903 23:13:54.163408  320181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-062474
	I0903 23:13:54.181027  320181 main.go:141] libmachine: Using SSH client type: native
	I0903 23:13:54.181339  320181 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e9ef0] 0x3ec6b0 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I0903 23:13:54.181351  320181 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0903 23:13:59.586791  320181 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0903 23:13:59.586806  320181 machine.go:96] duration metric: took 6.693001522s to provisionDockerMachine
	I0903 23:13:59.586816  320181 start.go:293] postStartSetup for "functional-062474" (driver="docker")
	I0903 23:13:59.586826  320181 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0903 23:13:59.586888  320181 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0903 23:13:59.586934  320181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-062474
	I0903 23:13:59.605083  320181 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21341-295927/.minikube/machines/functional-062474/id_rsa Username:docker}
	I0903 23:13:59.696850  320181 ssh_runner.go:195] Run: cat /etc/os-release
	I0903 23:13:59.700094  320181 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0903 23:13:59.700121  320181 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0903 23:13:59.700129  320181 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0903 23:13:59.700135  320181 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0903 23:13:59.700144  320181 filesync.go:126] Scanning /home/jenkins/minikube-integration/21341-295927/.minikube/addons for local assets ...
	I0903 23:13:59.700199  320181 filesync.go:126] Scanning /home/jenkins/minikube-integration/21341-295927/.minikube/files for local assets ...
	I0903 23:13:59.700277  320181 filesync.go:149] local asset: /home/jenkins/minikube-integration/21341-295927/.minikube/files/etc/ssl/certs/2977892.pem -> 2977892.pem in /etc/ssl/certs
	I0903 23:13:59.700354  320181 filesync.go:149] local asset: /home/jenkins/minikube-integration/21341-295927/.minikube/files/etc/test/nested/copy/297789/hosts -> hosts in /etc/test/nested/copy/297789
	I0903 23:13:59.700397  320181 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/297789
	I0903 23:13:59.709114  320181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-295927/.minikube/files/etc/ssl/certs/2977892.pem --> /etc/ssl/certs/2977892.pem (1708 bytes)
	I0903 23:13:59.734010  320181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-295927/.minikube/files/etc/test/nested/copy/297789/hosts --> /etc/test/nested/copy/297789/hosts (40 bytes)
	I0903 23:13:59.758118  320181 start.go:296] duration metric: took 171.28659ms for postStartSetup
	I0903 23:13:59.758209  320181 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0903 23:13:59.758252  320181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-062474
	I0903 23:13:59.775772  320181 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21341-295927/.minikube/machines/functional-062474/id_rsa Username:docker}
	I0903 23:13:59.860907  320181 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0903 23:13:59.865759  320181 fix.go:56] duration metric: took 6.992549353s for fixHost
	I0903 23:13:59.865775  320181 start.go:83] releasing machines lock for "functional-062474", held for 6.992591192s
	I0903 23:13:59.865853  320181 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-062474
	I0903 23:13:59.882737  320181 ssh_runner.go:195] Run: cat /version.json
	I0903 23:13:59.882757  320181 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0903 23:13:59.882780  320181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-062474
	I0903 23:13:59.882819  320181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-062474
	I0903 23:13:59.900800  320181 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21341-295927/.minikube/machines/functional-062474/id_rsa Username:docker}
	I0903 23:13:59.902317  320181 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21341-295927/.minikube/machines/functional-062474/id_rsa Username:docker}
	I0903 23:13:59.987176  320181 ssh_runner.go:195] Run: systemctl --version
	I0903 23:14:00.247963  320181 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0903 23:14:00.456580  320181 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0903 23:14:00.463359  320181 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0903 23:14:00.479999  320181 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0903 23:14:00.480108  320181 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0903 23:14:00.491469  320181 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0903 23:14:00.491486  320181 start.go:495] detecting cgroup driver to use...
	I0903 23:14:00.491525  320181 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0903 23:14:00.491581  320181 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0903 23:14:00.507885  320181 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0903 23:14:00.523637  320181 docker.go:218] disabling cri-docker service (if available) ...
	I0903 23:14:00.523724  320181 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0903 23:14:00.540019  320181 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0903 23:14:00.555454  320181 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0903 23:14:00.692876  320181 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0903 23:14:00.817854  320181 docker.go:234] disabling docker service ...
	I0903 23:14:00.817922  320181 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0903 23:14:00.832034  320181 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0903 23:14:00.844810  320181 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0903 23:14:00.964904  320181 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0903 23:14:01.093248  320181 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0903 23:14:01.105058  320181 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0903 23:14:01.123229  320181 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0903 23:14:01.123313  320181 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:14:01.135133  320181 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0903 23:14:01.135215  320181 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:14:01.147756  320181 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:14:01.158882  320181 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:14:01.170272  320181 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0903 23:14:01.180890  320181 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:14:01.192337  320181 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:14:01.203032  320181 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0903 23:14:01.213470  320181 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0903 23:14:01.222292  320181 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0903 23:14:01.231385  320181 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 23:14:01.355197  320181 ssh_runner.go:195] Run: sudo systemctl restart crio
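	# Taken together, the sed edits above converge on a drop-in roughly like the
	# following before crio is restarted (a reconstructed sketch; the TOML table
	# headers are assumed from the stock cri-o config layout, only the keys and
	# values appear in this log):
	#
	#   # /etc/crio/crio.conf.d/02-crio.conf
	#   [crio.image]
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#
	#   [crio.runtime]
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   default_sysctls = [
	#     "net.ipv4.ip_unprivileged_port_start=0",
	#   ]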
	I0903 23:14:01.555019  320181 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0903 23:14:01.555102  320181 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0903 23:14:01.558985  320181 start.go:563] Will wait 60s for crictl version
	I0903 23:14:01.559043  320181 ssh_runner.go:195] Run: which crictl
	I0903 23:14:01.562533  320181 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0903 23:14:01.601034  320181 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0903 23:14:01.601154  320181 ssh_runner.go:195] Run: crio --version
	I0903 23:14:01.642689  320181 ssh_runner.go:195] Run: crio --version
	I0903 23:14:01.687272  320181 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0903 23:14:01.690265  320181 cli_runner.go:164] Run: docker network inspect functional-062474 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0903 23:14:01.707179  320181 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0903 23:14:01.714441  320181 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0903 23:14:01.717327  320181 kubeadm.go:875] updating cluster {Name:functional-062474 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-062474 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0903 23:14:01.717469  320181 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0903 23:14:01.717546  320181 ssh_runner.go:195] Run: sudo crictl images --output json
	I0903 23:14:01.766767  320181 crio.go:514] all images are preloaded for cri-o runtime.
	I0903 23:14:01.766779  320181 crio.go:433] Images already preloaded, skipping extraction
	I0903 23:14:01.766846  320181 ssh_runner.go:195] Run: sudo crictl images --output json
	I0903 23:14:01.806127  320181 crio.go:514] all images are preloaded for cri-o runtime.
	I0903 23:14:01.806140  320181 cache_images.go:85] Images are preloaded, skipping loading
	I0903 23:14:01.806146  320181 kubeadm.go:926] updating node { 192.168.49.2 8441 v1.34.0 crio true true} ...
	I0903 23:14:01.806259  320181 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-062474 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:functional-062474 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0903 23:14:01.806346  320181 ssh_runner.go:195] Run: crio config
	I0903 23:14:01.858809  320181 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0903 23:14:01.858843  320181 cni.go:84] Creating CNI manager for ""
	I0903 23:14:01.858852  320181 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0903 23:14:01.858859  320181 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0903 23:14:01.858880  320181 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-062474 NodeName:functional-062474 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0903 23:14:01.859147  320181 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-062474"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0903 23:14:01.859228  320181 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0903 23:14:01.868490  320181 binaries.go:44] Found k8s binaries, skipping transfer
	I0903 23:14:01.868555  320181 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0903 23:14:01.877867  320181 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0903 23:14:01.897079  320181 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0903 23:14:01.916427  320181 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2064 bytes)
	I0903 23:14:01.937839  320181 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0903 23:14:01.941809  320181 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0903 23:14:02.064519  320181 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0903 23:14:02.076813  320181 certs.go:68] Setting up /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/functional-062474 for IP: 192.168.49.2
	I0903 23:14:02.076824  320181 certs.go:194] generating shared ca certs ...
	I0903 23:14:02.076854  320181 certs.go:226] acquiring lock for ca certs: {Name:mk7e6b174a793881e5001fc4d8e7ec5b846a7bc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0903 23:14:02.076989  320181 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21341-295927/.minikube/ca.key
	I0903 23:14:02.077025  320181 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21341-295927/.minikube/proxy-client-ca.key
	I0903 23:14:02.077032  320181 certs.go:256] generating profile certs ...
	I0903 23:14:02.077106  320181 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/functional-062474/client.key
	I0903 23:14:02.077156  320181 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/functional-062474/apiserver.key.9ff89385
	I0903 23:14:02.077198  320181 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/functional-062474/proxy-client.key
	I0903 23:14:02.077314  320181 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-295927/.minikube/certs/297789.pem (1338 bytes)
	W0903 23:14:02.077339  320181 certs.go:480] ignoring /home/jenkins/minikube-integration/21341-295927/.minikube/certs/297789_empty.pem, impossibly tiny 0 bytes
	I0903 23:14:02.077345  320181 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-295927/.minikube/certs/ca-key.pem (1675 bytes)
	I0903 23:14:02.077373  320181 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-295927/.minikube/certs/ca.pem (1082 bytes)
	I0903 23:14:02.077395  320181 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-295927/.minikube/certs/cert.pem (1123 bytes)
	I0903 23:14:02.077417  320181 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-295927/.minikube/certs/key.pem (1675 bytes)
	I0903 23:14:02.077460  320181 certs.go:484] found cert: /home/jenkins/minikube-integration/21341-295927/.minikube/files/etc/ssl/certs/2977892.pem (1708 bytes)
	I0903 23:14:02.078062  320181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-295927/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0903 23:14:02.102684  320181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-295927/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0903 23:14:02.127489  320181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-295927/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0903 23:14:02.152575  320181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-295927/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0903 23:14:02.178646  320181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/functional-062474/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0903 23:14:02.204122  320181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/functional-062474/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0903 23:14:02.229018  320181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/functional-062474/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0903 23:14:02.253939  320181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/functional-062474/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0903 23:14:02.279913  320181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-295927/.minikube/files/etc/ssl/certs/2977892.pem --> /usr/share/ca-certificates/2977892.pem (1708 bytes)
	I0903 23:14:02.305045  320181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-295927/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0903 23:14:02.330884  320181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21341-295927/.minikube/certs/297789.pem --> /usr/share/ca-certificates/297789.pem (1338 bytes)
	I0903 23:14:02.355630  320181 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0903 23:14:02.374741  320181 ssh_runner.go:195] Run: openssl version
	I0903 23:14:02.380488  320181 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2977892.pem && ln -fs /usr/share/ca-certificates/2977892.pem /etc/ssl/certs/2977892.pem"
	I0903 23:14:02.390458  320181 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2977892.pem
	I0903 23:14:02.394141  320181 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  3 23:12 /usr/share/ca-certificates/2977892.pem
	I0903 23:14:02.394213  320181 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2977892.pem
	I0903 23:14:02.401068  320181 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2977892.pem /etc/ssl/certs/3ec20f2e.0"
	I0903 23:14:02.409753  320181 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0903 23:14:02.419276  320181 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0903 23:14:02.423023  320181 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  3 23:05 /usr/share/ca-certificates/minikubeCA.pem
	I0903 23:14:02.423092  320181 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0903 23:14:02.430649  320181 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0903 23:14:02.439796  320181 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/297789.pem && ln -fs /usr/share/ca-certificates/297789.pem /etc/ssl/certs/297789.pem"
	I0903 23:14:02.449520  320181 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/297789.pem
	I0903 23:14:02.453118  320181 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  3 23:12 /usr/share/ca-certificates/297789.pem
	I0903 23:14:02.453173  320181 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/297789.pem
	I0903 23:14:02.460017  320181 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/297789.pem /etc/ssl/certs/51391683.0"
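	# The .0 link names in this sequence are OpenSSL subject hashes, so each link
	# target can be reproduced from the cert it points at; e.g. for the minikube CA:
	#
	#   $ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	#   b5213941
	#   # hence /etc/ssl/certs/b5213941.0 -> minikubeCA.pem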
	I0903 23:14:02.469437  320181 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0903 23:14:02.473251  320181 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0903 23:14:02.480341  320181 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0903 23:14:02.487446  320181 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0903 23:14:02.494519  320181 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0903 23:14:02.501528  320181 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0903 23:14:02.508720  320181 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0903 23:14:02.515706  320181 kubeadm.go:392] StartCluster: {Name:functional-062474 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-062474 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0903 23:14:02.515785  320181 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0903 23:14:02.515858  320181 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0903 23:14:02.554018  320181 cri.go:89] found id: "1e4e1888c51446a0c41fd8ede899f7bd7c76093a1a66338adc7938c53ec7c311"
	I0903 23:14:02.554029  320181 cri.go:89] found id: "dc1ff37f4ed04a05c73432bf2407d64f2a23f1f3abce72e4268357c06c981b6e"
	I0903 23:14:02.554033  320181 cri.go:89] found id: "5e7f29510b65b808c042d09848524ca3f1b2e8120aa71e1adb77e99db49244c6"
	I0903 23:14:02.554038  320181 cri.go:89] found id: "f8fdf59ff4a505ea8151f1e92582e73d727fd98fcce6f7ebcb76d2278ca6d2ae"
	I0903 23:14:02.554040  320181 cri.go:89] found id: "dc10019aca0c54ee3b9ef75228b03a87f8ef7e9ffd5721f7f17db0f6a0ae3f8a"
	I0903 23:14:02.554043  320181 cri.go:89] found id: "b2817cd193ff11b74000a02cdfb5619b9990b9607a0bcf1661979d0bd6ac93ad"
	I0903 23:14:02.554045  320181 cri.go:89] found id: "2e603814c0273ffae4befe6ab2cdf3230930560f0c2a1715b3e9012d7b914615"
	I0903 23:14:02.554047  320181 cri.go:89] found id: "da3a54fa4189242f9a5e73adf03d5e45ec5f9c693a6cd90ae7c16983f2a3c6ae"
	I0903 23:14:02.554049  320181 cri.go:89] found id: "ed5df81acf451d2172e0da788c0e62ccf6a4716a594ba1110458ff5b185bbde8"
	I0903 23:14:02.554056  320181 cri.go:89] found id: "ec52d7d4c20977bd3ae2a832071d8a9110b8ec883f67e0077276ed282dcba6ea"
	I0903 23:14:02.554058  320181 cri.go:89] found id: "e7151fcf8a1cd0e8cd54fa33a629f5200707a7317c1f1e9a30cd4d3f6c614649"
	I0903 23:14:02.554070  320181 cri.go:89] found id: "8a9f24a476af29446938c95a86fbd7b0fb1cfe6b829b44f7fb2a8a297d178601"
	I0903 23:14:02.554072  320181 cri.go:89] found id: "b91caadcb8e6849b5a9d524b600f715884ab2feda82153efb9f09a9b10bb5737"
	I0903 23:14:02.554074  320181 cri.go:89] found id: "9b7346c9d4efe15303ad756c21ed2dd09770993661a9386bbea6334108caa234"
	I0903 23:14:02.554076  320181 cri.go:89] found id: "d934b9279ba251730b402a3c308c80b2cb6e3a9b47844e3f80366c31e637c604"
	I0903 23:14:02.554081  320181 cri.go:89] found id: "5ea6feadeb5b5e126bd451801737c865c42f64db9a8d722ce75d4146dd1c516b"
	I0903 23:14:02.554083  320181 cri.go:89] found id: ""
	I0903 23:14:02.554135  320181 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-062474 -n functional-062474
helpers_test.go:269: (dbg) Run:  kubectl --context functional-062474 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-node-75c85bcc94-8kbzs hello-node-connect-7d85dfc575-m2qxp
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-062474 describe pod hello-node-75c85bcc94-8kbzs hello-node-connect-7d85dfc575-m2qxp
helpers_test.go:290: (dbg) kubectl --context functional-062474 describe pod hello-node-75c85bcc94-8kbzs hello-node-connect-7d85dfc575-m2qxp:

                                                
                                                
-- stdout --
	Name:             hello-node-75c85bcc94-8kbzs
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-062474/192.168.49.2
	Start Time:       Wed, 03 Sep 2025 23:15:04 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4wt9c (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-4wt9c:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m48s                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-8kbzs to functional-062474
	  Normal   Pulling    6m52s (x5 over 9m48s)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m52s (x5 over 9m48s)   kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     6m52s (x5 over 9m48s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m43s (x21 over 9m48s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m43s (x21 over 9m48s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-m2qxp
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-062474/192.168.49.2
	Start Time:       Wed, 03 Sep 2025 23:14:48 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7sknf (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-7sknf:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  10m                 default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-m2qxp to functional-062474
	  Normal   Pulling    7m1s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m1s (x5 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     7m1s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    0s (x43 over 10m)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     0s (x43 over 10m)   kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.77s)
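Root cause: the kubelet events above show every pull of "kicbase/echo-server" failing short-name resolution, because the node's /etc/containers/registries.conf defines neither an unqualified-search registry nor an alias for that name. A plausible remediation sketch, assuming the image is published on Docker Hub (not verified in this log), is to define a search registry on the node or to fully qualify the image when deploying:

	# inside the functional-062474 node -- a sketch, not the node's actual file
	# /etc/containers/registries.conf
	unqualified-search-registries = ["docker.io"]

	# or avoid short names entirely, mirroring the DeployApp invocation below:
	kubectl --context functional-062474 create deployment hello-node \
	  --image=docker.io/kicbase/echo-server

The same pull error also explains the three ServiceCmd failures that follow.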

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (601.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-062474 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-062474 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-8kbzs" [67825237-3c66-4c8b-9bc8-187b321fbee8] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E0903 23:15:45.987144  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/addons-250903/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:18:02.119945  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/addons-250903/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:18:29.828612  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/addons-250903/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:23:02.119142  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/addons-250903/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-062474 -n functional-062474
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-09-03 23:25:04.726580621 +0000 UTC m=+1217.069146574
functional_test.go:1460: (dbg) Run:  kubectl --context functional-062474 describe po hello-node-75c85bcc94-8kbzs -n default
functional_test.go:1460: (dbg) kubectl --context functional-062474 describe po hello-node-75c85bcc94-8kbzs -n default:
Name:             hello-node-75c85bcc94-8kbzs
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-062474/192.168.49.2
Start Time:       Wed, 03 Sep 2025 23:15:04 +0000
Labels:           app=hello-node
                  pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
  IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4wt9c (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-4wt9c:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-8kbzs to functional-062474
  Normal   Pulling    7m5s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m5s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
  Warning  Failed     7m5s (x5 over 10m)    kubelet            Error: ErrImagePull
  Normal   BackOff    4m56s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     4m56s (x21 over 10m)  kubelet            Error: ImagePullBackOff
functional_test.go:1460: (dbg) Run:  kubectl --context functional-062474 logs hello-node-75c85bcc94-8kbzs -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-062474 logs hello-node-75c85bcc94-8kbzs -n default: exit status 1 (167.370635ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-8kbzs" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-062474 logs hello-node-75c85bcc94-8kbzs -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (601.33s)
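Same root cause as TestFunctional/parallel/ServiceCmdConnect above: the short-name pull of "kicbase/echo-server" never resolves on the node, so hello-node-75c85bcc94-8kbzs sits in ImagePullBackOff for the entire 10m0s wait; see the registries.conf sketch after that test's post-mortem.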

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-062474 service --namespace=default --https --url hello-node: exit status 115 (550.380464ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:30396
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-062474 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.55s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-062474 service hello-node --url --format={{.IP}}: exit status 115 (562.801661ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-062474 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.56s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-062474 service hello-node --url: exit status 115 (523.320542ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:30396
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-062474 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30396
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.52s)
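All three URL-shaped failures (HTTPS, Format, URL) are downstream of the same ImagePullBackOff: the NodePort service exists and minikube prints its URL (port 30396), but SVC_UNREACHABLE fires because no running pod backs the service. A quick check that would confirm the empty backing set (hypothetical; not run in this log):

	kubectl --context functional-062474 get endpoints hello-node

which should list no ready addresses for the service while the hello-node pod remains in ImagePullBackOff.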

                                                
                                    

Test pass (294/332)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 7.57
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.09
9 TestDownloadOnly/v1.20.0/DeleteAll 0.21
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.0/json-events 8.14
13 TestDownloadOnly/v1.34.0/preload-exists 0
17 TestDownloadOnly/v1.34.0/LogsDuration 0.09
18 TestDownloadOnly/v1.34.0/DeleteAll 0.23
19 TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds 0.15
21 TestBinaryMirror 0.62
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 175.31
31 TestAddons/serial/GCPAuth/Namespaces 0.25
32 TestAddons/serial/GCPAuth/FakeCredentials 10.89
35 TestAddons/parallel/Registry 17.79
36 TestAddons/parallel/RegistryCreds 0.79
38 TestAddons/parallel/InspektorGadget 6.61
39 TestAddons/parallel/MetricsServer 5.92
41 TestAddons/parallel/CSI 47.06
42 TestAddons/parallel/Headlamp 17.07
43 TestAddons/parallel/CloudSpanner 5.56
44 TestAddons/parallel/LocalPath 9.36
45 TestAddons/parallel/NvidiaDevicePlugin 6.51
46 TestAddons/parallel/Yakd 11.75
48 TestAddons/StoppedEnableDisable 12.25
49 TestCertOptions 34.81
50 TestCertExpiration 243.66
52 TestForceSystemdFlag 41.51
53 TestForceSystemdEnv 44.81
59 TestErrorSpam/setup 28.22
60 TestErrorSpam/start 0.81
61 TestErrorSpam/status 1.05
62 TestErrorSpam/pause 1.76
63 TestErrorSpam/unpause 1.97
64 TestErrorSpam/stop 1.47
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 48.34
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 30.32
71 TestFunctional/serial/KubeContext 0.05
72 TestFunctional/serial/KubectlGetPods 0.09
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.93
76 TestFunctional/serial/CacheCmd/cache/add_local 1.45
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
78 TestFunctional/serial/CacheCmd/cache/list 0.07
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.32
80 TestFunctional/serial/CacheCmd/cache/cache_reload 2.04
81 TestFunctional/serial/CacheCmd/cache/delete 0.14
82 TestFunctional/serial/MinikubeKubectlCmd 0.14
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
84 TestFunctional/serial/ExtraConfig 35.01
85 TestFunctional/serial/ComponentHealth 0.1
86 TestFunctional/serial/LogsCmd 1.73
87 TestFunctional/serial/LogsFileCmd 1.82
88 TestFunctional/serial/InvalidService 4.48
90 TestFunctional/parallel/ConfigCmd 0.5
91 TestFunctional/parallel/DashboardCmd 9.07
92 TestFunctional/parallel/DryRun 0.63
93 TestFunctional/parallel/InternationalLanguage 0.25
94 TestFunctional/parallel/StatusCmd 1.35
99 TestFunctional/parallel/AddonsCmd 0.21
100 TestFunctional/parallel/PersistentVolumeClaim 25.82
102 TestFunctional/parallel/SSHCmd 0.73
103 TestFunctional/parallel/CpCmd 2.36
105 TestFunctional/parallel/FileSync 0.36
106 TestFunctional/parallel/CertSync 2.03
110 TestFunctional/parallel/NodeLabels 0.12
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.67
114 TestFunctional/parallel/License 0.36
116 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.67
117 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.47
120 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.14
121 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
125 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
127 TestFunctional/parallel/ProfileCmd/profile_not_create 0.42
128 TestFunctional/parallel/ProfileCmd/profile_list 0.44
129 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
130 TestFunctional/parallel/MountCmd/any-port 9
131 TestFunctional/parallel/MountCmd/specific-port 1.77
132 TestFunctional/parallel/MountCmd/VerifyCleanup 2.66
133 TestFunctional/parallel/ServiceCmd/List 0.55
134 TestFunctional/parallel/ServiceCmd/JSONOutput 0.54
138 TestFunctional/parallel/Version/short 0.09
139 TestFunctional/parallel/Version/components 1.38
140 TestFunctional/parallel/ImageCommands/ImageListShort 0.3
141 TestFunctional/parallel/ImageCommands/ImageListTable 0.26
142 TestFunctional/parallel/ImageCommands/ImageListJson 0.3
143 TestFunctional/parallel/ImageCommands/ImageListYaml 0.31
144 TestFunctional/parallel/ImageCommands/ImageBuild 3.93
145 TestFunctional/parallel/ImageCommands/Setup 0.71
146 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.23
147 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.2
148 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.3
149 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.63
150 TestFunctional/parallel/ImageCommands/ImageRemove 0.69
151 TestFunctional/parallel/UpdateContextCmd/no_changes 0.21
152 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.23
153 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.23
154 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.96
155 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.64
156 TestFunctional/delete_echo-server_images 0.04
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
163 TestMultiControlPlane/serial/StartCluster 211.07
164 TestMultiControlPlane/serial/DeployApp 8.81
165 TestMultiControlPlane/serial/PingHostFromPods 1.61
166 TestMultiControlPlane/serial/AddWorkerNode 59.64
167 TestMultiControlPlane/serial/NodeLabels 0.13
168 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.99
169 TestMultiControlPlane/serial/CopyFile 19.25
170 TestMultiControlPlane/serial/StopSecondaryNode 12.75
171 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.75
172 TestMultiControlPlane/serial/RestartSecondaryNode 32.59
173 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.09
174 TestMultiControlPlane/serial/RestartClusterKeepsNodes 133.52
175 TestMultiControlPlane/serial/DeleteSecondaryNode 12.33
176 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.75
177 TestMultiControlPlane/serial/StopCluster 35.79
178 TestMultiControlPlane/serial/RestartCluster 80.49
179 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.75
180 TestMultiControlPlane/serial/AddSecondaryNode 81.01
181 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.01
185 TestJSONOutput/start/Command 81.33
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Command 0.76
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Command 0.66
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.93
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.35
210 TestKicCustomNetwork/create_custom_network 41.84
211 TestKicCustomNetwork/use_default_bridge_network 33.92
212 TestKicExistingNetwork 36.09
213 TestKicCustomSubnet 33.58
214 TestKicStaticIP 34.3
215 TestMainNoArgs 0.13
216 TestMinikubeProfile 71.48
219 TestMountStart/serial/StartWithMountFirst 9.62
220 TestMountStart/serial/VerifyMountFirst 0.26
221 TestMountStart/serial/StartWithMountSecond 6.3
222 TestMountStart/serial/VerifyMountSecond 0.27
223 TestMountStart/serial/DeleteFirst 1.63
224 TestMountStart/serial/VerifyMountPostDelete 0.27
225 TestMountStart/serial/Stop 1.24
226 TestMountStart/serial/RestartStopped 7.55
227 TestMountStart/serial/VerifyMountPostStop 0.26
230 TestMultiNode/serial/FreshStart2Nodes 140.7
231 TestMultiNode/serial/DeployApp2Nodes 6.45
232 TestMultiNode/serial/PingHostFrom2Pods 0.98
233 TestMultiNode/serial/AddNode 54.23
234 TestMultiNode/serial/MultiNodeLabels 0.09
235 TestMultiNode/serial/ProfileList 0.65
236 TestMultiNode/serial/CopyFile 10.07
237 TestMultiNode/serial/StopNode 2.5
238 TestMultiNode/serial/StartAfterStop 8.06
239 TestMultiNode/serial/RestartKeepsNodes 76.69
240 TestMultiNode/serial/DeleteNode 5.36
241 TestMultiNode/serial/StopMultiNode 23.84
242 TestMultiNode/serial/RestartMultiNode 49.37
243 TestMultiNode/serial/ValidateNameConflict 34.62
248 TestPreload 141.52
250 TestScheduledStopUnix 104.58
253 TestInsufficientStorage 10.46
254 TestRunningBinaryUpgrade 79.43
256 TestKubernetesUpgrade 383.75
257 TestMissingContainerUpgrade 173.95
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.13
260 TestNoKubernetes/serial/StartWithK8s 39.12
261 TestNoKubernetes/serial/StartWithStopK8s 9.76
262 TestNoKubernetes/serial/Start 8.18
263 TestNoKubernetes/serial/VerifyK8sNotRunning 0.39
264 TestNoKubernetes/serial/ProfileList 1.44
265 TestNoKubernetes/serial/Stop 1.21
266 TestNoKubernetes/serial/StartNoArgs 7.3
267 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.35
268 TestStoppedBinaryUpgrade/Setup 0.92
269 TestStoppedBinaryUpgrade/Upgrade 73.66
270 TestStoppedBinaryUpgrade/MinikubeLogs 1.47
279 TestPause/serial/Start 81.9
280 TestPause/serial/SecondStartNoReconfiguration 43.48
281 TestPause/serial/Pause 0.86
282 TestPause/serial/VerifyStatus 0.34
283 TestPause/serial/Unpause 1.05
284 TestPause/serial/PauseAgain 1.5
285 TestPause/serial/DeletePaused 2.74
286 TestPause/serial/VerifyDeletedResources 0.53
294 TestNetworkPlugins/group/false 4.79
299 TestStartStop/group/old-k8s-version/serial/FirstStart 149.79
300 TestStartStop/group/old-k8s-version/serial/DeployApp 10.52
301 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.09
302 TestStartStop/group/old-k8s-version/serial/Stop 12.18
303 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.32
304 TestStartStop/group/old-k8s-version/serial/SecondStart 108.37
306 TestStartStop/group/no-preload/serial/FirstStart 79.19
307 TestStartStop/group/no-preload/serial/DeployApp 11.38
308 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.16
309 TestStartStop/group/no-preload/serial/Stop 12.03
310 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
311 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
312 TestStartStop/group/no-preload/serial/SecondStart 58.27
313 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.17
314 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.43
315 TestStartStop/group/old-k8s-version/serial/Pause 4.45
317 TestStartStop/group/embed-certs/serial/FirstStart 84.21
318 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
319 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.1
320 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
321 TestStartStop/group/no-preload/serial/Pause 3.09
323 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 79.5
324 TestStartStop/group/embed-certs/serial/DeployApp 10.44
325 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.27
326 TestStartStop/group/embed-certs/serial/Stop 11.97
327 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
328 TestStartStop/group/embed-certs/serial/SecondStart 55.28
329 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.37
330 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.13
331 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.95
332 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
333 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 58.54
334 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
335 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.15
336 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.33
337 TestStartStop/group/embed-certs/serial/Pause 3.95
339 TestStartStop/group/newest-cni/serial/FirstStart 41.32
340 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
341 TestStartStop/group/newest-cni/serial/DeployApp 0
342 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.11
343 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.13
344 TestStartStop/group/newest-cni/serial/Stop 1.24
345 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
346 TestStartStop/group/newest-cni/serial/SecondStart 20.75
347 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.27
348 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.24
349 TestNetworkPlugins/group/auto/Start 82.91
350 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
351 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
352 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.48
353 TestStartStop/group/newest-cni/serial/Pause 4.02
354 TestNetworkPlugins/group/kindnet/Start 55.47
355 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
356 TestNetworkPlugins/group/kindnet/KubeletFlags 0.3
357 TestNetworkPlugins/group/kindnet/NetCatPod 10.28
358 TestNetworkPlugins/group/auto/KubeletFlags 0.32
359 TestNetworkPlugins/group/auto/NetCatPod 10.31
360 TestNetworkPlugins/group/kindnet/DNS 0.22
361 TestNetworkPlugins/group/kindnet/Localhost 0.21
362 TestNetworkPlugins/group/kindnet/HairPin 0.17
363 TestNetworkPlugins/group/auto/DNS 0.21
364 TestNetworkPlugins/group/auto/Localhost 0.22
365 TestNetworkPlugins/group/auto/HairPin 0.2
366 TestNetworkPlugins/group/calico/Start 73.39
367 TestNetworkPlugins/group/custom-flannel/Start 54.1
368 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.3
369 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.35
370 TestNetworkPlugins/group/calico/ControllerPod 6.01
371 TestNetworkPlugins/group/custom-flannel/DNS 0.21
372 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
373 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
374 TestNetworkPlugins/group/calico/KubeletFlags 0.29
375 TestNetworkPlugins/group/calico/NetCatPod 13.29
376 TestNetworkPlugins/group/calico/DNS 0.27
377 TestNetworkPlugins/group/calico/Localhost 0.23
378 TestNetworkPlugins/group/calico/HairPin 0.25
379 TestNetworkPlugins/group/enable-default-cni/Start 88.64
380 TestNetworkPlugins/group/flannel/Start 66.35
381 TestNetworkPlugins/group/flannel/ControllerPod 6.01
382 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.3
383 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.29
384 TestNetworkPlugins/group/flannel/KubeletFlags 0.3
385 TestNetworkPlugins/group/flannel/NetCatPod 10.28
386 TestNetworkPlugins/group/enable-default-cni/DNS 0.22
387 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
388 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
389 TestNetworkPlugins/group/flannel/DNS 0.27
390 TestNetworkPlugins/group/flannel/Localhost 0.19
391 TestNetworkPlugins/group/flannel/HairPin 0.23
392 TestNetworkPlugins/group/bridge/Start 71.93
393 TestNetworkPlugins/group/bridge/KubeletFlags 0.35
394 TestNetworkPlugins/group/bridge/NetCatPod 9.28
395 TestNetworkPlugins/group/bridge/DNS 0.17
396 TestNetworkPlugins/group/bridge/Localhost 0.15
397 TestNetworkPlugins/group/bridge/HairPin 0.15
TestDownloadOnly/v1.20.0/json-events (7.57s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-135830 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-135830 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (7.574002289s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (7.57s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0903 23:04:55.274561  297789 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0903 23:04:55.274646  297789 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21341-295927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
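The preload check above boils down to verifying that the cached tarball exists on disk. A minimal sketch of such a check, assuming the cache path shown in the log (adjust for your own MINIKUBE_HOME):

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // Path taken from the log above; the layout is
        // $MINIKUBE_HOME/cache/preloaded-tarball/<preload>.tar.lz4.
        const preload = "/home/jenkins/minikube-integration/21341-295927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4"
        if info, err := os.Stat(preload); err == nil {
            fmt.Printf("found local preload (%d bytes)\n", info.Size())
        } else {
            fmt.Println("no local preload:", err)
        }
    }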

TestDownloadOnly/v1.20.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-135830
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-135830: exit status 85 (92.595667ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-135830 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-135830 │ jenkins │ v1.36.0 │ 03 Sep 25 23:04 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/03 23:04:47
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0903 23:04:47.743159  297795 out.go:360] Setting OutFile to fd 1 ...
	I0903 23:04:47.743277  297795 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 23:04:47.743288  297795 out.go:374] Setting ErrFile to fd 2...
	I0903 23:04:47.743294  297795 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 23:04:47.743554  297795 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21341-295927/.minikube/bin
	W0903 23:04:47.743718  297795 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21341-295927/.minikube/config/config.json: open /home/jenkins/minikube-integration/21341-295927/.minikube/config/config.json: no such file or directory
	I0903 23:04:47.744159  297795 out.go:368] Setting JSON to true
	I0903 23:04:47.744949  297795 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":6438,"bootTime":1756934250,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0903 23:04:47.745020  297795 start.go:140] virtualization:  
	I0903 23:04:47.749297  297795 out.go:99] [download-only-135830] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	W0903 23:04:47.749535  297795 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/21341-295927/.minikube/cache/preloaded-tarball: no such file or directory
	I0903 23:04:47.749611  297795 notify.go:220] Checking for updates...
	I0903 23:04:47.752500  297795 out.go:171] MINIKUBE_LOCATION=21341
	I0903 23:04:47.755579  297795 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0903 23:04:47.758665  297795 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21341-295927/kubeconfig
	I0903 23:04:47.761566  297795 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21341-295927/.minikube
	I0903 23:04:47.764490  297795 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W0903 23:04:47.770065  297795 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0903 23:04:47.770336  297795 driver.go:421] Setting default libvirt URI to qemu:///system
	I0903 23:04:47.801416  297795 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0903 23:04:47.801535  297795 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0903 23:04:47.860241  297795 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-09-03 23:04:47.850428647 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0903 23:04:47.860369  297795 docker.go:318] overlay module found
	I0903 23:04:47.863437  297795 out.go:99] Using the docker driver based on user configuration
	I0903 23:04:47.863486  297795 start.go:304] selected driver: docker
	I0903 23:04:47.863501  297795 start.go:918] validating driver "docker" against <nil>
	I0903 23:04:47.863623  297795 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0903 23:04:47.922669  297795 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-09-03 23:04:47.913640323 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0903 23:04:47.922826  297795 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0903 23:04:47.923142  297795 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I0903 23:04:47.923305  297795 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0903 23:04:47.926466  297795 out.go:171] Using Docker driver with root privileges
	I0903 23:04:47.929436  297795 cni.go:84] Creating CNI manager for ""
	I0903 23:04:47.929510  297795 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0903 23:04:47.929525  297795 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0903 23:04:47.929608  297795 start.go:348] cluster config:
	{Name:download-only-135830 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-135830 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0903 23:04:47.932644  297795 out.go:99] Starting "download-only-135830" primary control-plane node in "download-only-135830" cluster
	I0903 23:04:47.932679  297795 cache.go:123] Beginning downloading kic base image for docker with crio
	I0903 23:04:47.935534  297795 out.go:99] Pulling base image v0.0.47-1756116447-21413 ...
	I0903 23:04:47.935569  297795 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0903 23:04:47.935708  297795 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 in local docker daemon
	I0903 23:04:47.951318  297795 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 to local cache
	I0903 23:04:47.951504  297795 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 in local cache directory
	I0903 23:04:47.951605  297795 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 to local cache
	I0903 23:04:47.989778  297795 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I0903 23:04:47.989807  297795 cache.go:58] Caching tarball of preloaded images
	I0903 23:04:47.990571  297795 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0903 23:04:47.993832  297795 out.go:99] Downloading Kubernetes v1.20.0 preload ...
	I0903 23:04:47.993861  297795 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I0903 23:04:48.082401  297795 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:59cd2ef07b53f039bfd1761b921f2a02 -> /home/jenkins/minikube-integration/21341-295927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-135830 host does not exist
	  To start a cluster, run: "minikube start -p download-only-135830"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.09s)
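Exit status 85 is the expected outcome here: a download-only profile never creates a host, so minikube logs has nothing to read. A minimal sketch of how a caller can distinguish that expected exit code from a genuine failure with os/exec (binary path and profile name taken from the log; the pass/fail policy is illustrative, not minikube's own test helper):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-linux-arm64", "logs", "-p", "download-only-135830")
        out, err := cmd.CombinedOutput()
        var exitErr *exec.ExitError
        if errors.As(err, &exitErr) && exitErr.ExitCode() == 85 {
            // The expected "host does not exist" exit code: treat as success.
            fmt.Printf("got expected exit 85; %d bytes of output\n", len(out))
            return
        }
        if err != nil {
            fmt.Println("unexpected failure:", err)
        }
    }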

TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-135830
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.34.0/json-events (8.14s)

=== RUN   TestDownloadOnly/v1.34.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-111413 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-111413 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (8.14024667s)
--- PASS: TestDownloadOnly/v1.34.0/json-events (8.14s)

TestDownloadOnly/v1.34.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.0/preload-exists
I0903 23:05:03.860669  297789 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
I0903 23:05:03.860710  297789 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21341-295927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.0/preload-exists (0.00s)

TestDownloadOnly/v1.34.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.34.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-111413
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-111413: exit status 85 (92.067357ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-135830 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-135830 │ jenkins │ v1.36.0 │ 03 Sep 25 23:04 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.36.0 │ 03 Sep 25 23:04 UTC │ 03 Sep 25 23:04 UTC │
	│ delete  │ -p download-only-135830                                                                                                                                                   │ download-only-135830 │ jenkins │ v1.36.0 │ 03 Sep 25 23:04 UTC │ 03 Sep 25 23:04 UTC │
	│ start   │ -o=json --download-only -p download-only-111413 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-111413 │ jenkins │ v1.36.0 │ 03 Sep 25 23:04 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/03 23:04:55
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0903 23:04:55.773620  297993 out.go:360] Setting OutFile to fd 1 ...
	I0903 23:04:55.773733  297993 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 23:04:55.773744  297993 out.go:374] Setting ErrFile to fd 2...
	I0903 23:04:55.773750  297993 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 23:04:55.773991  297993 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21341-295927/.minikube/bin
	I0903 23:04:55.774378  297993 out.go:368] Setting JSON to true
	I0903 23:04:55.775174  297993 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":6446,"bootTime":1756934250,"procs":145,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0903 23:04:55.775244  297993 start.go:140] virtualization:  
	I0903 23:04:55.778699  297993 out.go:99] [download-only-111413] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	I0903 23:04:55.778975  297993 notify.go:220] Checking for updates...
	I0903 23:04:55.781941  297993 out.go:171] MINIKUBE_LOCATION=21341
	I0903 23:04:55.784855  297993 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0903 23:04:55.787923  297993 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21341-295927/kubeconfig
	I0903 23:04:55.790871  297993 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21341-295927/.minikube
	I0903 23:04:55.793782  297993 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W0903 23:04:55.799330  297993 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0903 23:04:55.799585  297993 driver.go:421] Setting default libvirt URI to qemu:///system
	I0903 23:04:55.823918  297993 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0903 23:04:55.824048  297993 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0903 23:04:55.884885  297993 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:48 SystemTime:2025-09-03 23:04:55.875914576 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0903 23:04:55.884991  297993 docker.go:318] overlay module found
	I0903 23:04:55.887932  297993 out.go:99] Using the docker driver based on user configuration
	I0903 23:04:55.887957  297993 start.go:304] selected driver: docker
	I0903 23:04:55.887970  297993 start.go:918] validating driver "docker" against <nil>
	I0903 23:04:55.888095  297993 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0903 23:04:55.951501  297993 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:48 SystemTime:2025-09-03 23:04:55.942461851 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0903 23:04:55.951734  297993 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0903 23:04:55.952029  297993 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I0903 23:04:55.952199  297993 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0903 23:04:55.955216  297993 out.go:171] Using Docker driver with root privileges
	I0903 23:04:55.957900  297993 cni.go:84] Creating CNI manager for ""
	I0903 23:04:55.957975  297993 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0903 23:04:55.957988  297993 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0903 23:04:55.958085  297993 start.go:348] cluster config:
	{Name:download-only-111413 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:download-only-111413 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0903 23:04:55.961089  297993 out.go:99] Starting "download-only-111413" primary control-plane node in "download-only-111413" cluster
	I0903 23:04:55.961114  297993 cache.go:123] Beginning downloading kic base image for docker with crio
	I0903 23:04:55.964013  297993 out.go:99] Pulling base image v0.0.47-1756116447-21413 ...
	I0903 23:04:55.964043  297993 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0903 23:04:55.964150  297993 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 in local docker daemon
	I0903 23:04:55.980247  297993 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 to local cache
	I0903 23:04:55.980384  297993 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 in local cache directory
	I0903 23:04:55.980407  297993 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 in local cache directory, skipping pull
	I0903 23:04:55.980416  297993 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 exists in cache, skipping pull
	I0903 23:04:55.980424  297993 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 as a tarball
	I0903 23:04:56.027235  297993 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.0/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4
	I0903 23:04:56.027263  297993 cache.go:58] Caching tarball of preloaded images
	I0903 23:04:56.028091  297993 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0903 23:04:56.031081  297993 out.go:99] Downloading Kubernetes v1.34.0 preload ...
	I0903 23:04:56.031118  297993 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4 ...
	I0903 23:04:56.121302  297993 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.0/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:36555bb244eebf6e383c5e8810b48b3a -> /home/jenkins/minikube-integration/21341-295927/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-111413 host does not exist
	  To start a cluster, run: "minikube start -p download-only-111413"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.0/LogsDuration (0.09s)

TestDownloadOnly/v1.34.0/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.34.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.0/DeleteAll (0.23s)

TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-111413
--- PASS: TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.15s)

TestBinaryMirror (0.62s)

=== RUN   TestBinaryMirror
I0903 23:05:05.243278  297789 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-819907 --alsologtostderr --binary-mirror http://127.0.0.1:44565 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-819907" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-819907
--- PASS: TestBinaryMirror (0.62s)
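TestBinaryMirror points the download at a local HTTP endpoint (http://127.0.0.1:44565 above) instead of dl.k8s.io. A sketch of a stand-in mirror under stated assumptions: ./mirror is a hypothetical directory laid out like the upstream release tree, including the .sha256 checksum files fetched alongside each binary:

    package main

    import (
        "log"
        "net/http"
    )

    func main() {
        // e.g. mirror/release/v1.34.0/bin/linux/arm64/kubectl and kubectl.sha256
        const mirrorDir = "./mirror"
        log.Fatal(http.ListenAndServe("127.0.0.1:44565", http.FileServer(http.Dir(mirrorDir))))
    }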

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-250903
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-250903: exit status 85 (78.395317ms)

-- stdout --
	* Profile "addons-250903" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-250903"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-250903
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-250903: exit status 85 (80.294938ms)

-- stdout --
	* Profile "addons-250903" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-250903"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

TestAddons/Setup (175.31s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-250903 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-250903 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m55.311769189s)
--- PASS: TestAddons/Setup (175.31s)

TestAddons/serial/GCPAuth/Namespaces (0.25s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-250903 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-250903 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.25s)

TestAddons/serial/GCPAuth/FakeCredentials (10.89s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-250903 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-250903 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [eec91944-9d65-4e9f-86e3-589672fa7cd0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [eec91944-9d65-4e9f-86e3-589672fa7cd0] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.005048198s
addons_test.go:694: (dbg) Run:  kubectl --context addons-250903 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-250903 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-250903 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-250903 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.89s)
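Inside the pod, the injected fake credentials are just an environment variable pointing at a mounted file, which is exactly what the printenv and cat probes above exercise. A sketch of a workload consuming them (env var names and the /google-app-creds.json path as seen in the log):

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // Set by the gcp-auth webhook; points at /google-app-creds.json here.
        path := os.Getenv("GOOGLE_APPLICATION_CREDENTIALS")
        data, err := os.ReadFile(path)
        if err != nil {
            fmt.Println("credentials not mounted:", err)
            return
        }
        fmt.Printf("project=%q, %d bytes of credentials\n", os.Getenv("GOOGLE_CLOUD_PROJECT"), len(data))
    }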

TestAddons/parallel/Registry (17.79s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 14.465991ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-qrfkz" [c35fb6f1-289a-424d-978e-8267be95ebd9] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003954022s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-xk2h9" [f969f46b-d8ef-4f8e-b119-afb8070f1e80] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003010253s
addons_test.go:392: (dbg) Run:  kubectl --context addons-250903 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-250903 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-250903 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.562386315s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-250903 ip
2025/09/03 23:08:38 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-250903 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.79s)
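The wget --spider call above is a plain HTTP reachability probe against the in-cluster registry Service. The same probe in Go might look like the sketch below; it has to run inside the cluster for the Service DNS name to resolve:

    package main

    import (
        "fmt"
        "net/http"
    )

    func main() {
        // Equivalent of wget --spider -S: a HEAD request, headers only.
        resp, err := http.Head("http://registry.kube-system.svc.cluster.local")
        if err != nil {
            fmt.Println("registry unreachable:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("registry responded:", resp.Status)
    }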

TestAddons/parallel/RegistryCreds (0.79s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 5.215737ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-250903
addons_test.go:332: (dbg) Run:  kubectl --context addons-250903 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-250903 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.79s)

TestAddons/parallel/InspektorGadget (6.61s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-vfnbb" [df20efa1-60fa-4d7e-97c5-d264752dfb5d] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003913894s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-250903 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (6.61s)

TestAddons/parallel/MetricsServer (5.92s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 6.298998ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-568v4" [412ee54d-941e-468e-870a-7a1c62c864e7] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.006615839s
addons_test.go:463: (dbg) Run:  kubectl --context addons-250903 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-250903 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.92s)

                                                
                                    
TestAddons/parallel/CSI (47.06s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0903 23:08:38.282276  297789 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0903 23:08:38.287185  297789 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0903 23:08:38.287211  297789 kapi.go:107] duration metric: took 5.867949ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 5.878322ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-250903 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-250903 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-250903 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-250903 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-250903 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-250903 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-250903 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-250903 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-250903 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-250903 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-250903 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-250903 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-250903 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-250903 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [c1a6a959-0757-4729-9e02-524303a0af1b] Pending
helpers_test.go:352: "task-pv-pod" [c1a6a959-0757-4729-9e02-524303a0af1b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [c1a6a959-0757-4729-9e02-524303a0af1b] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.003869193s
addons_test.go:572: (dbg) Run:  kubectl --context addons-250903 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-250903 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-250903 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-250903 delete pod task-pv-pod: (1.134965126s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-250903 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-250903 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-250903 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-250903 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-250903 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-250903 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-250903 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-250903 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [dfd9040b-8ce1-4a07-98ad-07fcb9b12527] Pending
helpers_test.go:352: "task-pv-pod-restore" [dfd9040b-8ce1-4a07-98ad-07fcb9b12527] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [dfd9040b-8ce1-4a07-98ad-07fcb9b12527] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.002763965s
addons_test.go:614: (dbg) Run:  kubectl --context addons-250903 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-250903 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-250903 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-250903 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-250903 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-250903 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.882018404s)
--- PASS: TestAddons/parallel/CSI (47.06s)
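Condensed, the snapshot/restore sequence the CSI test walks through is the one below; the testdata manifests themselves are not reproduced in this log, so only the referenced paths are shown:

	# provision a PVC, mount it in a pod, then snapshot the volume
	kubectl --context addons-250903 create -f testdata/csi-hostpath-driver/pvc.yaml
	kubectl --context addons-250903 create -f testdata/csi-hostpath-driver/pv-pod.yaml
	kubectl --context addons-250903 create -f testdata/csi-hostpath-driver/snapshot.yaml

	# gate on the snapshot becoming usable before restoring from it
	kubectl --context addons-250903 get volumesnapshot new-snapshot-demo \
	  -o jsonpath={.status.readyToUse} -n default

	# restore: a new PVC sourced from the snapshot, mounted by a new pod
	kubectl --context addons-250903 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
	kubectl --context addons-250903 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml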

                                                
                                    
TestAddons/parallel/Headlamp (17.07s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-250903 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-6f46646d79-z76dd" [0d2e2c18-1a65-4e5d-b954-f71763f0d36f] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6f46646d79-z76dd" [0d2e2c18-1a65-4e5d-b954-f71763f0d36f] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.003523639s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-250903 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-250903 addons disable headlamp --alsologtostderr -v=1: (6.07652497s)
--- PASS: TestAddons/parallel/Headlamp (17.07s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.56s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-c55d4cb6d-92rnt" [42480e93-8f9a-4a06-a4f3-f0042f9b4ba8] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004490745s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-250903 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.56s)

                                                
                                    
TestAddons/parallel/LocalPath (9.36s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-250903 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-250903 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-250903 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-250903 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-250903 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-250903 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-250903 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-250903 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [6c5d8b69-7254-4e14-a38f-68e4dcc5f84d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [6c5d8b69-7254-4e14-a38f-68e4dcc5f84d] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [6c5d8b69-7254-4e14-a38f-68e4dcc5f84d] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.004102013s
addons_test.go:967: (dbg) Run:  kubectl --context addons-250903 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-250903 ssh "cat /opt/local-path-provisioner/pvc-869b6e37-b5c3-43b8-a231-bd6f94d647a1_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-250903 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-250903 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-250903 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (9.36s)
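The ssh "cat" step above verifies end to end that the local-path provisioner materialized the PVC as a host directory. The directory name embeds the PV name, which varies per run, so a by-hand check has to look it up first; a sketch:

	# resolve the PV backing the bound claim (the pvc-<uid> part changes per run)
	PV=$(kubectl --context addons-250903 get pvc test-pvc -o jsonpath={.spec.volumeName})

	# read the file the pod wrote, straight from the node's host path
	out/minikube-linux-arm64 -p addons-250903 ssh \
	  "cat /opt/local-path-provisioner/${PV}_default_test-pvc/file1"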

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.51s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-vzwcd" [413a92e7-6b19-427c-b018-fa6ead381c02] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003843783s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-250903 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.51s)

                                                
                                    
TestAddons/parallel/Yakd (11.75s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-5pvgz" [300a6cc4-db46-4475-b457-37f8ab5712b1] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003022547s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-250903 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-250903 addons disable yakd --alsologtostderr -v=1: (5.747658227s)
--- PASS: TestAddons/parallel/Yakd (11.75s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.25s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-250903
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-250903: (11.963241867s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-250903
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-250903
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-250903
--- PASS: TestAddons/StoppedEnableDisable (12.25s)
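StoppedEnableDisable checks that addon toggles are accepted while the cluster is down; with no apiserver reachable, the change presumably lands in the profile's config and takes effect on the next start. The sequence, as run above:

	out/minikube-linux-arm64 stop -p addons-250903
	out/minikube-linux-arm64 addons enable dashboard -p addons-250903
	out/minikube-linux-arm64 addons disable dashboard -p addons-250903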

                                                
                                    
TestCertOptions (34.81s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-594463 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-594463 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (32.156587116s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-594463 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-594463 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-594463 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-594463" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-594463
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-594463: (1.973211519s)
--- PASS: TestCertOptions (34.81s)
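The openssl step above is where the custom SANs and port are actually asserted. To eyeball the same thing, the SAN block of the apiserver certificate should list 192.168.15.15 and www.google.com; a sketch filtering the cert dump down to that block:

	# print only the Subject Alternative Name block of the apiserver cert
	out/minikube-linux-arm64 -p cert-options-594463 ssh \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
	  | grep -A1 "Subject Alternative Name"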

                                                
                                    
TestCertExpiration (243.66s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-498958 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
E0904 00:03:02.119084  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/addons-250903/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-498958 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (41.297904784s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-498958 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-498958 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (19.684989542s)
helpers_test.go:175: Cleaning up "cert-expiration-498958" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-498958
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-498958: (2.675455129s)
--- PASS: TestCertExpiration (243.66s)
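The test issues certificates with a 3-minute lifetime, waits out the expiry (most of the 243s above), then restarts with --cert-expiration=8760h, which forces re-issuance on the existing profile. A sketch for inspecting the current expiry directly, assuming the same cert path used by TestCertOptions:

	# show the notAfter date of the apiserver certificate on the node
	out/minikube-linux-arm64 -p cert-expiration-498958 ssh \
	  "openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"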

                                                
                                    
TestForceSystemdFlag (41.51s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-293997 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-293997 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (38.441033334s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-293997 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-293997" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-293997
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-293997: (2.672999566s)
--- PASS: TestForceSystemdFlag (41.51s)
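The cat of 02-crio.conf above is the assertion that --force-systemd reconfigured CRI-O's cgroup manager. A sketch narrowing that to the relevant key, assuming minikube writes the standard CRI-O cgroup_manager setting into that drop-in:

	# with --force-systemd this should show: cgroup_manager = "systemd"
	out/minikube-linux-arm64 -p force-systemd-flag-293997 ssh \
	  "grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"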

                                                
                                    
TestForceSystemdEnv (44.81s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-480800 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0904 00:02:40.683742  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/functional-062474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 00:02:45.194826  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/addons-250903/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-480800 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (41.993446177s)
helpers_test.go:175: Cleaning up "force-systemd-env-480800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-480800
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-480800: (2.818296273s)
--- PASS: TestForceSystemdEnv (44.81s)

                                                
                                    
TestErrorSpam/setup (28.22s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-119092 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-119092 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-119092 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-119092 --driver=docker  --container-runtime=crio: (28.214286056s)
--- PASS: TestErrorSpam/setup (28.22s)

                                                
                                    
TestErrorSpam/start (0.81s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-119092 --log_dir /tmp/nospam-119092 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-119092 --log_dir /tmp/nospam-119092 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-119092 --log_dir /tmp/nospam-119092 start --dry-run
--- PASS: TestErrorSpam/start (0.81s)

                                                
                                    
TestErrorSpam/status (1.05s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-119092 --log_dir /tmp/nospam-119092 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-119092 --log_dir /tmp/nospam-119092 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-119092 --log_dir /tmp/nospam-119092 status
--- PASS: TestErrorSpam/status (1.05s)

                                                
                                    
TestErrorSpam/pause (1.76s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-119092 --log_dir /tmp/nospam-119092 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-119092 --log_dir /tmp/nospam-119092 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-119092 --log_dir /tmp/nospam-119092 pause
--- PASS: TestErrorSpam/pause (1.76s)

                                                
                                    
TestErrorSpam/unpause (1.97s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-119092 --log_dir /tmp/nospam-119092 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-119092 --log_dir /tmp/nospam-119092 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-119092 --log_dir /tmp/nospam-119092 unpause
--- PASS: TestErrorSpam/unpause (1.97s)

                                                
                                    
TestErrorSpam/stop (1.47s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-119092 --log_dir /tmp/nospam-119092 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-119092 --log_dir /tmp/nospam-119092 stop: (1.257135665s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-119092 --log_dir /tmp/nospam-119092 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-119092 --log_dir /tmp/nospam-119092 stop
--- PASS: TestErrorSpam/stop (1.47s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21341-295927/.minikube/files/etc/test/nested/copy/297789/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (48.34s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-062474 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E0903 23:13:02.124820  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/addons-250903/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:13:02.132038  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/addons-250903/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:13:02.143494  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/addons-250903/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:13:02.164953  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/addons-250903/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:13:02.206329  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/addons-250903/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:13:02.287793  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/addons-250903/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:13:02.449397  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/addons-250903/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:13:02.770760  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/addons-250903/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:13:03.412901  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/addons-250903/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:13:04.694364  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/addons-250903/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:13:07.257053  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/addons-250903/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:13:12.379334  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/addons-250903/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-062474 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (48.336363501s)
--- PASS: TestFunctional/serial/StartWithProxy (48.34s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (30.32s)

=== RUN   TestFunctional/serial/SoftStart
I0903 23:13:13.833046  297789 config.go:182] Loaded profile config "functional-062474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-062474 --alsologtostderr -v=8
E0903 23:13:22.621315  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/addons-250903/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:13:43.103441  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/addons-250903/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-062474 --alsologtostderr -v=8: (30.309810004s)
functional_test.go:678: soft start took 30.318693818s for "functional-062474" cluster.
I0903 23:13:44.143215  297789 config.go:182] Loaded profile config "functional-062474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/SoftStart (30.32s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-062474 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.93s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-062474 cache add registry.k8s.io/pause:3.1: (1.367164004s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-062474 cache add registry.k8s.io/pause:3.3: (1.305144337s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-062474 cache add registry.k8s.io/pause:latest: (1.253279905s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.93s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.45s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-062474 /tmp/TestFunctionalserialCacheCmdcacheadd_local2736381862/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 cache add minikube-local-cache-test:functional-062474
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 cache delete minikube-local-cache-test:functional-062474
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-062474
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.45s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (2.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-062474 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (288.429305ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-linux-arm64 -p functional-062474 cache reload: (1.108777943s)
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.04s)
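The cache_reload round trip above reads as a unit: delete the image from the node, prove it is gone (crictl inspecti exits 1), then push the cached copy back and prove it is present again. The same sequence by hand:

	out/minikube-linux-arm64 -p functional-062474 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-arm64 -p functional-062474 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: gone
	out/minikube-linux-arm64 -p functional-062474 cache reload
	out/minikube-linux-arm64 -p functional-062474 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 0: restored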

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.14s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 kubectl -- --context functional-062474 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-062474 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                    
TestFunctional/serial/ExtraConfig (35.01s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-062474 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0903 23:14:24.065045  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/addons-250903/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-062474 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (35.001189876s)
functional_test.go:776: restart took 35.001492371s for "functional-062474" cluster.
I0903 23:14:27.575502  297789 config.go:182] Loaded profile config "functional-062474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/ExtraConfig (35.01s)
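--extra-config=apiserver.<key>=<value> passes a flag through to the kube-apiserver via kubeadm. The test only asserts that the restart converges; a sketch of additionally confirming the flag reached the apiserver process (this pgrep check is an illustration, not part of the test):

	out/minikube-linux-arm64 start -p functional-062474 \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
	# look for the flag on the running apiserver's command line
	out/minikube-linux-arm64 -p functional-062474 ssh \
	  "pgrep -af kube-apiserver | grep -o 'enable-admission-plugins=[^ ]*'"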

                                                
                                    
TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-062474 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.73s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-062474 logs: (1.729300119s)
--- PASS: TestFunctional/serial/LogsCmd (1.73s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.82s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 logs --file /tmp/TestFunctionalserialLogsFileCmd2065829784/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-062474 logs --file /tmp/TestFunctionalserialLogsFileCmd2065829784/001/logs.txt: (1.819034599s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.82s)

                                                
                                    
TestFunctional/serial/InvalidService (4.48s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-062474 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-062474
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-062474: exit status 115 (743.408111ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32407 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-062474 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.48s)
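SVC_UNREACHABLE (exit 115) is minikube refusing to open a service that has no running pods behind it: invalid-svc gets a NodePort URL, but nothing serves it. A quick way to see that state before invoking minikube service, as a sketch:

	# a service whose selector matches no ready pods has empty endpoints
	kubectl --context functional-062474 get endpoints invalid-svc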

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.5s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-062474 config get cpus: exit status 14 (76.694692ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-062474 config get cpus: exit status 14 (68.189963ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.50s)
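The exit codes here are the contract: config get on an unset key exits 14 with "specified key could not be found in config", while set/unset and get on a present key exit 0. The cycle from above:

	out/minikube-linux-arm64 -p functional-062474 config get cpus     # exit 14 (unset)
	out/minikube-linux-arm64 -p functional-062474 config set cpus 2
	out/minikube-linux-arm64 -p functional-062474 config get cpus     # prints 2, exit 0
	out/minikube-linux-arm64 -p functional-062474 config unset cpus
	out/minikube-linux-arm64 -p functional-062474 config get cpus     # exit 14 again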

                                                
                                    
TestFunctional/parallel/DashboardCmd (9.07s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-062474 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-062474 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 327190: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.07s)

                                                
                                    
TestFunctional/parallel/DryRun (0.63s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-062474 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-062474 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (271.203434ms)

                                                
                                                
-- stdout --
	* [functional-062474] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21341
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21341-295927/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21341-295927/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0903 23:25:08.709081  326664 out.go:360] Setting OutFile to fd 1 ...
	I0903 23:25:08.709270  326664 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 23:25:08.709331  326664 out.go:374] Setting ErrFile to fd 2...
	I0903 23:25:08.709350  326664 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 23:25:08.709637  326664 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21341-295927/.minikube/bin
	I0903 23:25:08.710031  326664 out.go:368] Setting JSON to false
	I0903 23:25:08.710992  326664 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7659,"bootTime":1756934250,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0903 23:25:08.711102  326664 start.go:140] virtualization:  
	I0903 23:25:08.714208  326664 out.go:179] * [functional-062474] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	I0903 23:25:08.719879  326664 notify.go:220] Checking for updates...
	I0903 23:25:08.723694  326664 out.go:179]   - MINIKUBE_LOCATION=21341
	I0903 23:25:08.727859  326664 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0903 23:25:08.730788  326664 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21341-295927/kubeconfig
	I0903 23:25:08.733944  326664 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21341-295927/.minikube
	I0903 23:25:08.736878  326664 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0903 23:25:08.739878  326664 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0903 23:25:08.744521  326664 config.go:182] Loaded profile config "functional-062474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0903 23:25:08.745898  326664 driver.go:421] Setting default libvirt URI to qemu:///system
	I0903 23:25:08.785657  326664 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0903 23:25:08.785793  326664 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0903 23:25:08.878445  326664 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-03 23:25:08.868668737 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0903 23:25:08.878561  326664 docker.go:318] overlay module found
	I0903 23:25:08.881797  326664 out.go:179] * Using the docker driver based on existing profile
	I0903 23:25:08.884184  326664 start.go:304] selected driver: docker
	I0903 23:25:08.884203  326664 start.go:918] validating driver "docker" against &{Name:functional-062474 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-062474 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0903 23:25:08.884308  326664 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0903 23:25:08.887954  326664 out.go:203] 
	W0903 23:25:08.895869  326664 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0903 23:25:08.898769  326664 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-062474 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.63s)
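The two dry-run invocations above bracket minikube's memory validation: --memory 250MB trips the RSRC_INSUFFICIENT_REQ_MEMORY guard, while the follow-up run without the undersized override succeeds. For reference, a request at or above the 1800MB floor named in the error message should validate; a minimal sketch reusing this test's profile and flags:

    out/minikube-linux-arm64 start -p functional-062474 --dry-run --memory 1800MB --alsologtostderr -v=1 --driver=docker --container-runtime=crio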

TestFunctional/parallel/InternationalLanguage (0.25s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-062474 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-062474 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (244.777261ms)

-- stdout --
	* [functional-062474] minikube v1.36.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21341
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21341-295927/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21341-295927/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0903 23:25:08.459609  326588 out.go:360] Setting OutFile to fd 1 ...
	I0903 23:25:08.461479  326588 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 23:25:08.461494  326588 out.go:374] Setting ErrFile to fd 2...
	I0903 23:25:08.461506  326588 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 23:25:08.464060  326588 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21341-295927/.minikube/bin
	I0903 23:25:08.464589  326588 out.go:368] Setting JSON to false
	I0903 23:25:08.465729  326588 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7659,"bootTime":1756934250,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0903 23:25:08.465811  326588 start.go:140] virtualization:  
	I0903 23:25:08.469370  326588 out.go:179] * [functional-062474] minikube v1.36.0 sur Ubuntu 20.04 (arm64)
	I0903 23:25:08.473232  326588 out.go:179]   - MINIKUBE_LOCATION=21341
	I0903 23:25:08.473504  326588 notify.go:220] Checking for updates...
	I0903 23:25:08.481261  326588 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0903 23:25:08.484366  326588 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21341-295927/kubeconfig
	I0903 23:25:08.487357  326588 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21341-295927/.minikube
	I0903 23:25:08.490640  326588 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0903 23:25:08.493639  326588 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0903 23:25:08.497191  326588 config.go:182] Loaded profile config "functional-062474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0903 23:25:08.497828  326588 driver.go:421] Setting default libvirt URI to qemu:///system
	I0903 23:25:08.533521  326588 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0903 23:25:08.533680  326588 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0903 23:25:08.608971  326588 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-03 23:25:08.598776716 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0903 23:25:08.609080  326588 docker.go:318] overlay module found
	I0903 23:25:08.612249  326588 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I0903 23:25:08.615241  326588 start.go:304] selected driver: docker
	I0903 23:25:08.615279  326588 start.go:918] validating driver "docker" against &{Name:functional-062474 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756116447-21413@sha256:0420dcb4b989a4f3e21680d5952b2239e0fcff16c7f6520d036ddb10d7c257d9 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-062474 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0903 23:25:08.615399  326588 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0903 23:25:08.619210  326588 out.go:203] 
	W0903 23:25:08.622149  326588 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0903 23:25:08.625694  326588 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.25s)
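The French stdout/stderr above is the point of this test: minikube picks its message catalog from the process locale. A sketch of reproducing it by hand, assuming the usual LC_ALL/LANG lookup selects the translation (the specific locale name fr_FR.UTF-8 is an assumption, not shown in the log):

    LC_ALL=fr_FR.UTF-8 out/minikube-linux-arm64 start -p functional-062474 --dry-run --memory 250MB --driver=docker --container-runtime=crio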

TestFunctional/parallel/StatusCmd (1.35s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.35s)
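The -f flag above takes a Go template over the status struct, and -o json emits the same fields as JSON. A sketch of consuming the JSON form (field name Host taken from the template in the command above; jq on the host is an assumption):

    out/minikube-linux-arm64 -p functional-062474 status -o json | jq -r '.Host'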

TestFunctional/parallel/AddonsCmd (0.21s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.21s)

TestFunctional/parallel/PersistentVolumeClaim (25.82s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [9e182b8c-781b-4951-b03c-743a61c679ec] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.00473937s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-062474 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-062474 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-062474 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-062474 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [6405ca9e-f07b-436b-88e6-0c8505878469] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [6405ca9e-f07b-436b-88e6-0c8505878469] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.004102594s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-062474 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-062474 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-062474 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [6698fcaf-f105-4dd9-be4f-d8ea3fe55fdb] Pending
helpers_test.go:352: "sp-pod" [6698fcaf-f105-4dd9-be4f-d8ea3fe55fdb] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003429541s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-062474 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.82s)
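The sequence above verifies that data written through a PVC outlives the pod: sp-pod writes /tmp/mount/foo, is deleted, is recreated from the same manifest, and the file is still listed. The testdata manifests are not reproduced in this report; a hypothetical minimal claim of the same shape (the name myclaim comes from the log, the access mode and size are assumptions):

    kubectl --context functional-062474 apply -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: myclaim
    spec:
      accessModes: ["ReadWriteOnce"]   # assumption, not shown in the log
      resources:
        requests:
          storage: 500Mi               # assumption, not shown in the log
    EOF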

TestFunctional/parallel/SSHCmd (0.73s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.73s)

TestFunctional/parallel/CpCmd (2.36s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 ssh -n functional-062474 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 cp functional-062474:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1657253566/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 ssh -n functional-062474 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 ssh -n functional-062474 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.36s)

TestFunctional/parallel/FileSync (0.36s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/297789/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 ssh "sudo cat /etc/test/nested/copy/297789/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.36s)
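File sync copies anything placed under $MINIKUBE_HOME/files into the node at the mirrored path on start, which is how /etc/test/nested/copy/297789/hosts got there. A sketch of staging the file this test asserts on (MINIKUBE_HOME as printed earlier in this report; the content string is the one the assertion checks):

    mkdir -p "$MINIKUBE_HOME/files/etc/test/nested/copy/297789"
    echo "Test file for checking file sync process" > "$MINIKUBE_HOME/files/etc/test/nested/copy/297789/hosts"
    # appears at /etc/test/nested/copy/297789/hosts inside the node after the next start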

TestFunctional/parallel/CertSync (2.03s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/297789.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 ssh "sudo cat /etc/ssl/certs/297789.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/297789.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 ssh "sudo cat /usr/share/ca-certificates/297789.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/2977892.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 ssh "sudo cat /etc/ssl/certs/2977892.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/2977892.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 ssh "sudo cat /usr/share/ca-certificates/2977892.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.03s)
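The .0 names checked last are OpenSSL subject-hash aliases of the synced certificates, assuming the usual cert-sync layout where pem files staged under $MINIKUBE_HOME/certs are installed into /etc/ssl/certs under both their original and hash names. A sketch of deriving the hash seen in this run:

    # should print 51391683, matching /etc/ssl/certs/51391683.0 above
    openssl x509 -noout -hash -in "$MINIKUBE_HOME/certs/297789.pem"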

TestFunctional/parallel/NodeLabels (0.12s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-062474 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.12s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.67s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-062474 ssh "sudo systemctl is-active docker": exit status 1 (335.541428ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-062474 ssh "sudo systemctl is-active containerd": exit status 1 (330.74292ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.67s)
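Both probes rely on systemctl is-active semantics: it prints the unit state and exits 0 only when the unit is active, hence "inactive" plus exit status 3 for docker and containerd on this crio cluster. The complementary check for the active runtime would be:

    # crio is this profile's runtime, so this should print "active" and exit 0
    out/minikube-linux-arm64 -p functional-062474 ssh "sudo systemctl is-active crio"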

TestFunctional/parallel/License (0.36s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.36s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.67s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-062474 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-062474 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-062474 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-062474 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 322731: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.67s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-062474 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.47s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-062474 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [f8bd9524-e67e-4c43-88b2-64181e691c50] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [f8bd9524-e67e-4c43-88b2-64181e691c50] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.003262013s
I0903 23:14:48.085650  297789 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.47s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.14s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-062474 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.14s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.103.148.198 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
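With the tunnel running, the LoadBalancer ingress IP reported above is directly routable from the host. A sketch combining the jsonpath query from WaitService/IngressIP with a plain curl:

    IP=$(kubectl --context functional-062474 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    curl -s "http://$IP" >/dev/null && echo "tunnel reachable at $IP"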

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-062474 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "376.965883ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "58.643171ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "344.34942ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "62.65865ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)
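The --light variant returns in roughly 60ms versus about 340ms for the full listing, presumably because it skips per-profile status probing. A sketch of pulling profile names out of the JSON (assuming the output keeps minikube's valid/invalid grouping; jq is an assumption):

    out/minikube-linux-arm64 profile list -o json --light | jq -r '.valid[].Name'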

TestFunctional/parallel/MountCmd/any-port (9s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-062474 /tmp/TestFunctionalparallelMountCmdany-port3720772748/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1756941893604549658" to /tmp/TestFunctionalparallelMountCmdany-port3720772748/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1756941893604549658" to /tmp/TestFunctionalparallelMountCmdany-port3720772748/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1756941893604549658" to /tmp/TestFunctionalparallelMountCmdany-port3720772748/001/test-1756941893604549658
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-062474 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (325.255414ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0903 23:24:53.932011  297789 retry.go:31] will retry after 649.658507ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep  3 23:24 created-by-test
-rw-r--r-- 1 docker docker 24 Sep  3 23:24 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep  3 23:24 test-1756941893604549658
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 ssh cat /mount-9p/test-1756941893604549658
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-062474 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [007d678f-eeab-4783-bd16-c02ff836cabe] Pending
helpers_test.go:352: "busybox-mount" [007d678f-eeab-4783-bd16-c02ff836cabe] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [007d678f-eeab-4783-bd16-c02ff836cabe] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [007d678f-eeab-4783-bd16-c02ff836cabe] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.003552713s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-062474 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-062474 /tmp/TestFunctionalparallelMountCmdany-port3720772748/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.00s)
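The any-port test drives a 9p mount end to end: start the mount daemon, poll findmnt until the mount appears (hence the single retried probe above), exercise it from a busybox pod, then unmount. The standalone shape of that workflow, with a hypothetical host directory:

    out/minikube-linux-arm64 mount -p functional-062474 /tmp/some-host-dir:/mount-9p &   # hypothetical host path
    out/minikube-linux-arm64 -p functional-062474 ssh "findmnt -T /mount-9p | grep 9p"   # may need a retry right after startup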

TestFunctional/parallel/MountCmd/specific-port (1.77s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-062474 /tmp/TestFunctionalparallelMountCmdspecific-port3582957172/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-062474 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (363.430057ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0903 23:25:02.966813  297789 retry.go:31] will retry after 379.018042ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-062474 /tmp/TestFunctionalparallelMountCmdspecific-port3582957172/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-062474 ssh "sudo umount -f /mount-9p": exit status 1 (292.411212ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-062474 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-062474 /tmp/TestFunctionalparallelMountCmdspecific-port3582957172/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.77s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.66s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-062474 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3874675020/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-062474 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3874675020/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-062474 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3874675020/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-062474 ssh "findmnt -T" /mount1: exit status 1 (828.625993ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0903 23:25:05.205008  297789 retry.go:31] will retry after 666.668834ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-062474 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-062474 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3874675020/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-062474 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3874675020/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-062474 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3874675020/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.66s)

TestFunctional/parallel/ServiceCmd/List (0.55s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.55s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.54s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 service list -o json
functional_test.go:1504: Took "542.219164ms" to run "out/minikube-linux-arm64 -p functional-062474 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.54s)

TestFunctional/parallel/Version/short (0.09s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

TestFunctional/parallel/Version/components (1.38s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-062474 version -o=json --components: (1.379090219s)
--- PASS: TestFunctional/parallel/Version/components (1.38s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-062474 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.0
registry.k8s.io/kube-proxy:v1.34.0
registry.k8s.io/kube-controller-manager:v1.34.0
registry.k8s.io/kube-apiserver:v1.34.0
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-062474
localhost/kicbase/echo-server:functional-062474
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-062474 image ls --format short --alsologtostderr:
I0903 23:25:22.629822  328829 out.go:360] Setting OutFile to fd 1 ...
I0903 23:25:22.630425  328829 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0903 23:25:22.630470  328829 out.go:374] Setting ErrFile to fd 2...
I0903 23:25:22.630489  328829 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0903 23:25:22.630774  328829 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21341-295927/.minikube/bin
I0903 23:25:22.631813  328829 config.go:182] Loaded profile config "functional-062474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0903 23:25:22.631980  328829 config.go:182] Loaded profile config "functional-062474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0903 23:25:22.632471  328829 cli_runner.go:164] Run: docker container inspect functional-062474 --format={{.State.Status}}
I0903 23:25:22.674556  328829 ssh_runner.go:195] Run: systemctl --version
I0903 23:25:22.674613  328829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-062474
I0903 23:25:22.701965  328829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21341-295927/.minikube/machines/functional-062474/id_rsa Username:docker}
I0903 23:25:22.792219  328829 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.30s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-062474 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ localhost/minikube-local-cache-test     │ functional-062474  │ 47b87744254a4 │ 3.33kB │
│ registry.k8s.io/kube-controller-manager │ v1.34.0            │ 996be7e86d9b3 │ 72.6MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ docker.io/library/nginx                 │ latest             │ 47ef8710c9f5a │ 202MB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 138784d87c9c5 │ 73.2MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ a1894772a478e │ 206MB  │
│ registry.k8s.io/kube-apiserver          │ v1.34.0            │ d291939e99406 │ 84.8MB │
│ registry.k8s.io/kube-proxy              │ v1.34.0            │ 6fc32d66c1411 │ 75.9MB │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ docker.io/library/nginx                 │ alpine             │ 35f3cbee4fb77 │ 54.3MB │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 1611cd07b61d5 │ 3.77MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.0            │ a25f5ef9c34c3 │ 51.6MB │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ localhost/kicbase/echo-server           │ functional-062474  │ ce2d2cda2d858 │ 4.79MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-062474 image ls --format table --alsologtostderr:
I0903 23:25:23.365050  329043 out.go:360] Setting OutFile to fd 1 ...
I0903 23:25:23.365187  329043 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0903 23:25:23.365196  329043 out.go:374] Setting ErrFile to fd 2...
I0903 23:25:23.365208  329043 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0903 23:25:23.365657  329043 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21341-295927/.minikube/bin
I0903 23:25:23.366592  329043 config.go:182] Loaded profile config "functional-062474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0903 23:25:23.367208  329043 config.go:182] Loaded profile config "functional-062474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0903 23:25:23.367827  329043 cli_runner.go:164] Run: docker container inspect functional-062474 --format={{.State.Status}}
I0903 23:25:23.390560  329043 ssh_runner.go:195] Run: systemctl --version
I0903 23:25:23.390737  329043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-062474
I0903 23:25:23.411907  329043 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21341-295927/.minikube/machines/functional-062474/id_rsa Username:docker}
I0903 23:25:23.516137  329043 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-062474 image ls --format json --alsologtostderr:
[{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"47ef8710c9f5a9276b3e347e3ab71ee44c8483e20f8636380ae2737ef4c27758","repoDigests":["docker.io/library/nginx@sha256:1e297dbd6dd3441f54fbeeef6be4688f257a85580b21940d18c2c11f9ce6a708","docker.io/library/nginx@sha256:33e0bbc7ca9ecf108140af6288c7c9d1ecc77548cbfd3952fd8466a75edefe57"],"repoTags":["docker.io/library/nginx:latest"],"size":"202036629"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1f
d21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"996be7e86d9b3a549d718de63713d9fea9db1f45ac44863a6770292d7b463570","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:02610466f968b6af36e9e76aee7a1d52f922fba4ec5cdb7a5423137d726f0da5","registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.0"],"size":"72629077"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"20b332c9a70d8516d849d1ac23e
ff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"6fc32d66c141152245438e6512df788cb52d64a1617e33561950b0e7a4675abf","repoDigests":["registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067","registry.k8s.io/kube-proxy@sha256:49d0c22a7e97772329396bf30c435c70c6ad77f527040f4334b88076500e883e"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.0"],"size":"75938
711"},{"id":"a25f5ef9c34c37c649f3b4f9631a169221ac2d6f41d9767c7588cd355f76f9ee","repoDigests":["registry.k8s.io/kube-scheduler@sha256:03bf1b9fae1536dc052874c2943f6c9c16410bf65e88e042109d7edc0e574422","registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.0"],"size":"51592021"},{"id":"35f3cbee4fb77c3efb39f2723a21ce181906139442a37de8ffc52d89641d9936","repoDigests":["docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8","docker.io/library/nginx@sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54348302"},{"id":"a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","repoDigests":["registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["regi
stry.k8s.io/etcd:3.6.4-0"],"size":"205987068"},{"id":"d291939e994064911484215449d0ab96c535b02adc4fc5d0ad4e438cf71465be","repoDigests":["registry.k8s.io/kube-apiserver@sha256:ef0790d885e5e46cad864b09351a201eb54f01ea5755de1c3a53a7d90ea1286f","registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.0"],"size":"84818927"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"a422e0e982356
f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["localhost/kicbase/echo-server:functional-062474"],"size":"4788229"},{"id":"47b87744254a4c8c547eea0dcf89ecc8d549586f57cd284312574082e7a23e9e","repoDigests":["localhost/minikube-local-cache-test@sha256:31437ba5636ae39a82f36d3f9559e368f2d171398ac92121704ef3be4d4e43b0"],"repoTags":["localhost/minikube-local-cache-test:functional-062474"],"size":"3330"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f3
75a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-062474 image ls --format json --alsologtostderr:
I0903 23:25:23.089888  328960 out.go:360] Setting OutFile to fd 1 ...
I0903 23:25:23.090094  328960 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0903 23:25:23.090122  328960 out.go:374] Setting ErrFile to fd 2...
I0903 23:25:23.090143  328960 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0903 23:25:23.090444  328960 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21341-295927/.minikube/bin
I0903 23:25:23.091147  328960 config.go:182] Loaded profile config "functional-062474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0903 23:25:23.091319  328960 config.go:182] Loaded profile config "functional-062474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0903 23:25:23.091909  328960 cli_runner.go:164] Run: docker container inspect functional-062474 --format={{.State.Status}}
I0903 23:25:23.121852  328960 ssh_runner.go:195] Run: systemctl --version
I0903 23:25:23.121907  328960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-062474
I0903 23:25:23.141658  328960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21341-295927/.minikube/machines/functional-062474/id_rsa Username:docker}
I0903 23:25:23.236719  328960 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.30s)
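
Note: the stdout above is the one-line JSON array that `image ls --format json` builds from the node's `sudo crictl images --output json` (the last Run in the stderr). A minimal sketch for inspecting it by hand, assuming jq is available on the host; field names are taken from the dump above:

out/minikube-linux-arm64 -p functional-062474 image ls --format json \
  | jq -r '.[] | (.repoTags[0] // "<untagged>") + "  " + .size'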

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-062474 image ls --format yaml --alsologtostderr:
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: 35f3cbee4fb77c3efb39f2723a21ce181906139442a37de8ffc52d89641d9936
repoDigests:
- docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8
- docker.io/library/nginx@sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac
repoTags:
- docker.io/library/nginx:alpine
size: "54348302"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- localhost/kicbase/echo-server:functional-062474
size: "4788229"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 6fc32d66c141152245438e6512df788cb52d64a1617e33561950b0e7a4675abf
repoDigests:
- registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067
- registry.k8s.io/kube-proxy@sha256:49d0c22a7e97772329396bf30c435c70c6ad77f527040f4334b88076500e883e
repoTags:
- registry.k8s.io/kube-proxy:v1.34.0
size: "75938711"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 47b87744254a4c8c547eea0dcf89ecc8d549586f57cd284312574082e7a23e9e
repoDigests:
- localhost/minikube-local-cache-test@sha256:31437ba5636ae39a82f36d3f9559e368f2d171398ac92121704ef3be4d4e43b0
repoTags:
- localhost/minikube-local-cache-test:functional-062474
size: "3330"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"
- id: 996be7e86d9b3a549d718de63713d9fea9db1f45ac44863a6770292d7b463570
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:02610466f968b6af36e9e76aee7a1d52f922fba4ec5cdb7a5423137d726f0da5
- registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.0
size: "72629077"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 47ef8710c9f5a9276b3e347e3ab71ee44c8483e20f8636380ae2737ef4c27758
repoDigests:
- docker.io/library/nginx@sha256:1e297dbd6dd3441f54fbeeef6be4688f257a85580b21940d18c2c11f9ce6a708
- docker.io/library/nginx@sha256:33e0bbc7ca9ecf108140af6288c7c9d1ecc77548cbfd3952fd8466a75edefe57
repoTags:
- docker.io/library/nginx:latest
size: "202036629"
- id: a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests:
- registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "205987068"
- id: d291939e994064911484215449d0ab96c535b02adc4fc5d0ad4e438cf71465be
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:ef0790d885e5e46cad864b09351a201eb54f01ea5755de1c3a53a7d90ea1286f
- registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.0
size: "84818927"
- id: a25f5ef9c34c37c649f3b4f9631a169221ac2d6f41d9767c7588cd355f76f9ee
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:03bf1b9fae1536dc052874c2943f6c9c16410bf65e88e042109d7edc0e574422
- registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.0
size: "51592021"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-062474 image ls --format yaml --alsologtostderr:
I0903 23:25:22.756086  328885 out.go:360] Setting OutFile to fd 1 ...
I0903 23:25:22.756276  328885 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0903 23:25:22.756306  328885 out.go:374] Setting ErrFile to fd 2...
I0903 23:25:22.756325  328885 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0903 23:25:22.756700  328885 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21341-295927/.minikube/bin
I0903 23:25:22.757849  328885 config.go:182] Loaded profile config "functional-062474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0903 23:25:22.758079  328885 config.go:182] Loaded profile config "functional-062474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0903 23:25:22.758886  328885 cli_runner.go:164] Run: docker container inspect functional-062474 --format={{.State.Status}}
I0903 23:25:22.775982  328885 ssh_runner.go:195] Run: systemctl --version
I0903 23:25:22.776042  328885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-062474
I0903 23:25:22.804121  328885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21341-295927/.minikube/machines/functional-062474/id_rsa Username:docker}
I0903 23:25:22.899712  328885 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.93s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-062474 ssh pgrep buildkitd: exit status 1 (308.385503ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 image build -t localhost/my-image:functional-062474 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-062474 image build -t localhost/my-image:functional-062474 testdata/build --alsologtostderr: (3.390530567s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-062474 image build -t localhost/my-image:functional-062474 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> af57788d483
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-062474
--> 0f277ffb230
Successfully tagged localhost/my-image:functional-062474
0f277ffb2308a1885a37ebae0333e9499702e65b052d2b8d759d7d2cbc40db73
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-062474 image build -t localhost/my-image:functional-062474 testdata/build --alsologtostderr:
I0903 23:25:23.220170  329010 out.go:360] Setting OutFile to fd 1 ...
I0903 23:25:23.221105  329010 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0903 23:25:23.221130  329010 out.go:374] Setting ErrFile to fd 2...
I0903 23:25:23.221137  329010 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0903 23:25:23.221464  329010 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21341-295927/.minikube/bin
I0903 23:25:23.222159  329010 config.go:182] Loaded profile config "functional-062474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0903 23:25:23.222733  329010 config.go:182] Loaded profile config "functional-062474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0903 23:25:23.223211  329010 cli_runner.go:164] Run: docker container inspect functional-062474 --format={{.State.Status}}
I0903 23:25:23.249008  329010 ssh_runner.go:195] Run: systemctl --version
I0903 23:25:23.249073  329010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-062474
I0903 23:25:23.277839  329010 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21341-295927/.minikube/machines/functional-062474/id_rsa Username:docker}
I0903 23:25:23.376149  329010 build_images.go:161] Building image from path: /tmp/build.568953375.tar
I0903 23:25:23.376227  329010 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0903 23:25:23.387172  329010 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.568953375.tar
I0903 23:25:23.391414  329010 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.568953375.tar: stat -c "%s %y" /var/lib/minikube/build/build.568953375.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.568953375.tar': No such file or directory
I0903 23:25:23.391444  329010 ssh_runner.go:362] scp /tmp/build.568953375.tar --> /var/lib/minikube/build/build.568953375.tar (3072 bytes)
I0903 23:25:23.422733  329010 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.568953375
I0903 23:25:23.435526  329010 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.568953375 -xf /var/lib/minikube/build/build.568953375.tar
I0903 23:25:23.457122  329010 crio.go:315] Building image: /var/lib/minikube/build/build.568953375
I0903 23:25:23.457200  329010 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-062474 /var/lib/minikube/build/build.568953375 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I0903 23:25:26.525455  329010 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-062474 /var/lib/minikube/build/build.568953375 --cgroup-manager=cgroupfs: (3.068231405s)
I0903 23:25:26.525521  329010 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.568953375
I0903 23:25:26.534386  329010 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.568953375.tar
I0903 23:25:26.544587  329010 build_images.go:217] Built localhost/my-image:functional-062474 from /tmp/build.568953375.tar
I0903 23:25:26.544619  329010 build_images.go:133] succeeded building to: functional-062474
I0903 23:25:26.544625  329010 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.93s)
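
Note: the three STEP lines in the stdout above determine the build file. A hypothetical reconstruction of what testdata/build contains (inferred from the STEP output, not quoted from the repo):

# testdata/build/Dockerfile (inferred)
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /

As the stderr traces, minikube tars this context to /tmp, copies the tar into the node under /var/lib/minikube/build, untars it, and on the crio runtime drives the build with `sudo podman build --cgroup-manager=cgroupfs`.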

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.71s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-062474
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.71s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 image load --daemon kicbase/echo-server:functional-062474 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-062474 image load --daemon kicbase/echo-server:functional-062474 --alsologtostderr: (2.955507878s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 image load --daemon kicbase/echo-server:functional-062474 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
2025/09/03 23:25:18 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-062474
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 image load --daemon kicbase/echo-server:functional-062474 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.30s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.63s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 image save kicbase/echo-server:functional-062474 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.63s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.69s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 image rm kicbase/echo-server:functional-062474 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.69s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.23s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.96s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.96s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.64s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-062474
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-062474 image save --daemon kicbase/echo-server:functional-062474 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-062474
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.64s)
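
Note: ImageSaveToFile, ImageRemove, ImageLoadFromFile, and ImageSaveDaemon together exercise a full save/load round trip. A condensed sketch of that flow, using the same commands the tests invoke above (tar path as in those runs):

out/minikube-linux-arm64 -p functional-062474 image save kicbase/echo-server:functional-062474 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
out/minikube-linux-arm64 -p functional-062474 image rm kicbase/echo-server:functional-062474
out/minikube-linux-arm64 -p functional-062474 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
out/minikube-linux-arm64 -p functional-062474 image save --daemon kicbase/echo-server:functional-062474
docker image inspect localhost/kicbase/echo-server:functional-062474   # confirms the image landed back in the host daemon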

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-062474
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-062474
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-062474
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (211.07s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E0903 23:28:02.121656  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/addons-250903/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-009884 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (3m29.982052942s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 status --alsologtostderr -v 5
ha_test.go:107: (dbg) Done: out/minikube-linux-arm64 -p ha-009884 status --alsologtostderr -v 5: (1.090830466s)
--- PASS: TestMultiControlPlane/serial/StartCluster (211.07s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (8.81s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-009884 kubectl -- rollout status deployment/busybox: (5.736130819s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 kubectl -- exec busybox-7b57f96db7-6p2wt -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 kubectl -- exec busybox-7b57f96db7-b6bkk -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 kubectl -- exec busybox-7b57f96db7-m5v8v -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 kubectl -- exec busybox-7b57f96db7-6p2wt -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 kubectl -- exec busybox-7b57f96db7-b6bkk -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 kubectl -- exec busybox-7b57f96db7-m5v8v -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 kubectl -- exec busybox-7b57f96db7-6p2wt -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 kubectl -- exec busybox-7b57f96db7-b6bkk -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 kubectl -- exec busybox-7b57f96db7-m5v8v -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (8.81s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.61s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 kubectl -- exec busybox-7b57f96db7-6p2wt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 kubectl -- exec busybox-7b57f96db7-6p2wt -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 kubectl -- exec busybox-7b57f96db7-b6bkk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 kubectl -- exec busybox-7b57f96db7-b6bkk -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 kubectl -- exec busybox-7b57f96db7-m5v8v -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 kubectl -- exec busybox-7b57f96db7-m5v8v -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.61s)
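
Note: the `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3` pipeline relies on busybox nslookup's fixed layout, with the answer on line 5. Illustrative output it parses (address taken from the pings above; exact layout varies across busybox builds):

Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.49.1 host.minikube.internal

awk 'NR==5' keeps only that last line, and cut -d' ' -f3 extracts its third space-separated field, 192.168.49.1, which each pod then pings.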

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (59.64s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 node add --alsologtostderr -v 5
E0903 23:29:25.190953  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/addons-250903/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:29:37.616436  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/functional-062474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:29:37.622815  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/functional-062474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:29:37.634508  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/functional-062474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:29:37.656482  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/functional-062474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:29:37.697877  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/functional-062474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:29:37.779235  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/functional-062474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:29:37.940680  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/functional-062474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:29:38.262545  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/functional-062474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:29:38.904553  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/functional-062474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:29:40.186602  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/functional-062474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:29:42.748548  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/functional-062474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:29:47.870083  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/functional-062474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:29:58.111432  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/functional-062474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-009884 node add --alsologtostderr -v 5: (58.627395155s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-009884 status --alsologtostderr -v 5: (1.017293623s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (59.64s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.13s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-009884 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.13s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.99s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.99s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (19.25s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 cp testdata/cp-test.txt ha-009884:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 ssh -n ha-009884 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 cp ha-009884:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1283774631/001/cp-test_ha-009884.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 ssh -n ha-009884 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 cp ha-009884:/home/docker/cp-test.txt ha-009884-m02:/home/docker/cp-test_ha-009884_ha-009884-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 ssh -n ha-009884 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 ssh -n ha-009884-m02 "sudo cat /home/docker/cp-test_ha-009884_ha-009884-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 cp ha-009884:/home/docker/cp-test.txt ha-009884-m03:/home/docker/cp-test_ha-009884_ha-009884-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 ssh -n ha-009884 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 ssh -n ha-009884-m03 "sudo cat /home/docker/cp-test_ha-009884_ha-009884-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 cp ha-009884:/home/docker/cp-test.txt ha-009884-m04:/home/docker/cp-test_ha-009884_ha-009884-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 ssh -n ha-009884 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 ssh -n ha-009884-m04 "sudo cat /home/docker/cp-test_ha-009884_ha-009884-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 cp testdata/cp-test.txt ha-009884-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 ssh -n ha-009884-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 cp ha-009884-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1283774631/001/cp-test_ha-009884-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 ssh -n ha-009884-m02 "sudo cat /home/docker/cp-test.txt"
E0903 23:30:18.593729  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/functional-062474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 cp ha-009884-m02:/home/docker/cp-test.txt ha-009884:/home/docker/cp-test_ha-009884-m02_ha-009884.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 ssh -n ha-009884-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 ssh -n ha-009884 "sudo cat /home/docker/cp-test_ha-009884-m02_ha-009884.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 cp ha-009884-m02:/home/docker/cp-test.txt ha-009884-m03:/home/docker/cp-test_ha-009884-m02_ha-009884-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 ssh -n ha-009884-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 ssh -n ha-009884-m03 "sudo cat /home/docker/cp-test_ha-009884-m02_ha-009884-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 cp ha-009884-m02:/home/docker/cp-test.txt ha-009884-m04:/home/docker/cp-test_ha-009884-m02_ha-009884-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 ssh -n ha-009884-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 ssh -n ha-009884-m04 "sudo cat /home/docker/cp-test_ha-009884-m02_ha-009884-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 cp testdata/cp-test.txt ha-009884-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 ssh -n ha-009884-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 cp ha-009884-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1283774631/001/cp-test_ha-009884-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 ssh -n ha-009884-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 cp ha-009884-m03:/home/docker/cp-test.txt ha-009884:/home/docker/cp-test_ha-009884-m03_ha-009884.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 ssh -n ha-009884-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 ssh -n ha-009884 "sudo cat /home/docker/cp-test_ha-009884-m03_ha-009884.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 cp ha-009884-m03:/home/docker/cp-test.txt ha-009884-m02:/home/docker/cp-test_ha-009884-m03_ha-009884-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 ssh -n ha-009884-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 ssh -n ha-009884-m02 "sudo cat /home/docker/cp-test_ha-009884-m03_ha-009884-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 cp ha-009884-m03:/home/docker/cp-test.txt ha-009884-m04:/home/docker/cp-test_ha-009884-m03_ha-009884-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 ssh -n ha-009884-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 ssh -n ha-009884-m04 "sudo cat /home/docker/cp-test_ha-009884-m03_ha-009884-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 cp testdata/cp-test.txt ha-009884-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 ssh -n ha-009884-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 cp ha-009884-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1283774631/001/cp-test_ha-009884-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 ssh -n ha-009884-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 cp ha-009884-m04:/home/docker/cp-test.txt ha-009884:/home/docker/cp-test_ha-009884-m04_ha-009884.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 ssh -n ha-009884-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 ssh -n ha-009884 "sudo cat /home/docker/cp-test_ha-009884-m04_ha-009884.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 cp ha-009884-m04:/home/docker/cp-test.txt ha-009884-m02:/home/docker/cp-test_ha-009884-m04_ha-009884-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 ssh -n ha-009884-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 ssh -n ha-009884-m02 "sudo cat /home/docker/cp-test_ha-009884-m04_ha-009884-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 cp ha-009884-m04:/home/docker/cp-test.txt ha-009884-m03:/home/docker/cp-test_ha-009884-m04_ha-009884-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 ssh -n ha-009884-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 ssh -n ha-009884-m03 "sudo cat /home/docker/cp-test_ha-009884-m04_ha-009884-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.25s)
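
Note: the block above enumerates every source/destination pair by hand. A compact sketch of the same pairwise pattern (node names from this cluster; the loop structure is illustrative, not the helper's actual code):

NODES="ha-009884 ha-009884-m02 ha-009884-m03 ha-009884-m04"
for SRC in $NODES; do
  out/minikube-linux-arm64 -p ha-009884 cp testdata/cp-test.txt "$SRC":/home/docker/cp-test.txt
  for DST in $NODES; do
    [ "$SRC" = "$DST" ] && continue
    out/minikube-linux-arm64 -p ha-009884 cp "$SRC":/home/docker/cp-test.txt "$DST":/home/docker/cp-test_"$SRC"_"$DST".txt
    out/minikube-linux-arm64 -p ha-009884 ssh -n "$DST" "sudo cat /home/docker/cp-test_${SRC}_${DST}.txt"
  done
done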

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.75s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-009884 node stop m02 --alsologtostderr -v 5: (12.004397995s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-009884 status --alsologtostderr -v 5: exit status 7 (746.427325ms)

                                                
                                                
-- stdout --
	ha-009884
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-009884-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-009884-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-009884-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0903 23:30:43.137644  344879 out.go:360] Setting OutFile to fd 1 ...
	I0903 23:30:43.137853  344879 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 23:30:43.137903  344879 out.go:374] Setting ErrFile to fd 2...
	I0903 23:30:43.137923  344879 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 23:30:43.138351  344879 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21341-295927/.minikube/bin
	I0903 23:30:43.138625  344879 out.go:368] Setting JSON to false
	I0903 23:30:43.138687  344879 mustload.go:65] Loading cluster: ha-009884
	I0903 23:30:43.139183  344879 notify.go:220] Checking for updates...
	I0903 23:30:43.139571  344879 config.go:182] Loaded profile config "ha-009884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0903 23:30:43.139608  344879 status.go:174] checking status of ha-009884 ...
	I0903 23:30:43.140227  344879 cli_runner.go:164] Run: docker container inspect ha-009884 --format={{.State.Status}}
	I0903 23:30:43.165983  344879 status.go:371] ha-009884 host status = "Running" (err=<nil>)
	I0903 23:30:43.166013  344879 host.go:66] Checking if "ha-009884" exists ...
	I0903 23:30:43.166392  344879 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-009884
	I0903 23:30:43.201845  344879 host.go:66] Checking if "ha-009884" exists ...
	I0903 23:30:43.202299  344879 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0903 23:30:43.202346  344879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-009884
	I0903 23:30:43.223925  344879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/21341-295927/.minikube/machines/ha-009884/id_rsa Username:docker}
	I0903 23:30:43.313495  344879 ssh_runner.go:195] Run: systemctl --version
	I0903 23:30:43.318507  344879 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0903 23:30:43.332572  344879 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0903 23:30:43.406425  344879 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:true NGoroutines:72 SystemTime:2025-09-03 23:30:43.395813248 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0903 23:30:43.406991  344879 kubeconfig.go:125] found "ha-009884" server: "https://192.168.49.254:8443"
	I0903 23:30:43.407033  344879 api_server.go:166] Checking apiserver status ...
	I0903 23:30:43.407082  344879 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:30:43.418679  344879 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1410/cgroup
	I0903 23:30:43.430167  344879 api_server.go:182] apiserver freezer: "12:freezer:/docker/11caf70816c9c7d8f1c8455c6851242964f91bbbc84555c893c7a4e60796daec/crio/crio-bf6eca05415c4dbf54538ca29651f119822c82fd6db3666ca0cf4c29d3ea4397"
	I0903 23:30:43.430245  344879 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/11caf70816c9c7d8f1c8455c6851242964f91bbbc84555c893c7a4e60796daec/crio/crio-bf6eca05415c4dbf54538ca29651f119822c82fd6db3666ca0cf4c29d3ea4397/freezer.state
	I0903 23:30:43.440968  344879 api_server.go:204] freezer state: "THAWED"
	I0903 23:30:43.440995  344879 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0903 23:30:43.450884  344879 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0903 23:30:43.450955  344879 status.go:463] ha-009884 apiserver status = Running (err=<nil>)
	I0903 23:30:43.450993  344879 status.go:176] ha-009884 status: &{Name:ha-009884 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0903 23:30:43.451019  344879 status.go:174] checking status of ha-009884-m02 ...
	I0903 23:30:43.451407  344879 cli_runner.go:164] Run: docker container inspect ha-009884-m02 --format={{.State.Status}}
	I0903 23:30:43.469682  344879 status.go:371] ha-009884-m02 host status = "Stopped" (err=<nil>)
	I0903 23:30:43.469708  344879 status.go:384] host is not running, skipping remaining checks
	I0903 23:30:43.469715  344879 status.go:176] ha-009884-m02 status: &{Name:ha-009884-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0903 23:30:43.469744  344879 status.go:174] checking status of ha-009884-m03 ...
	I0903 23:30:43.470092  344879 cli_runner.go:164] Run: docker container inspect ha-009884-m03 --format={{.State.Status}}
	I0903 23:30:43.494983  344879 status.go:371] ha-009884-m03 host status = "Running" (err=<nil>)
	I0903 23:30:43.495013  344879 host.go:66] Checking if "ha-009884-m03" exists ...
	I0903 23:30:43.495379  344879 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-009884-m03
	I0903 23:30:43.512665  344879 host.go:66] Checking if "ha-009884-m03" exists ...
	I0903 23:30:43.513009  344879 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0903 23:30:43.513061  344879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-009884-m03
	I0903 23:30:43.530903  344879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21341-295927/.minikube/machines/ha-009884-m03/id_rsa Username:docker}
	I0903 23:30:43.621145  344879 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0903 23:30:43.633529  344879 kubeconfig.go:125] found "ha-009884" server: "https://192.168.49.254:8443"
	I0903 23:30:43.633559  344879 api_server.go:166] Checking apiserver status ...
	I0903 23:30:43.633602  344879 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:30:43.644150  344879 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1354/cgroup
	I0903 23:30:43.654112  344879 api_server.go:182] apiserver freezer: "12:freezer:/docker/d67703b05816651edd8f2a3b1125e26aabfe8484f90d7a54ca3b52858f211dbf/crio/crio-a96d2981fa125db57db0c8e48ad2aca752f13532c5f9ec8c95c220bd4846ba2b"
	I0903 23:30:43.654224  344879 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/d67703b05816651edd8f2a3b1125e26aabfe8484f90d7a54ca3b52858f211dbf/crio/crio-a96d2981fa125db57db0c8e48ad2aca752f13532c5f9ec8c95c220bd4846ba2b/freezer.state
	I0903 23:30:43.663150  344879 api_server.go:204] freezer state: "THAWED"
	I0903 23:30:43.663179  344879 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0903 23:30:43.671603  344879 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0903 23:30:43.671633  344879 status.go:463] ha-009884-m03 apiserver status = Running (err=<nil>)
	I0903 23:30:43.671643  344879 status.go:176] ha-009884-m03 status: &{Name:ha-009884-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0903 23:30:43.671746  344879 status.go:174] checking status of ha-009884-m04 ...
	I0903 23:30:43.672080  344879 cli_runner.go:164] Run: docker container inspect ha-009884-m04 --format={{.State.Status}}
	I0903 23:30:43.688956  344879 status.go:371] ha-009884-m04 host status = "Running" (err=<nil>)
	I0903 23:30:43.688977  344879 host.go:66] Checking if "ha-009884-m04" exists ...
	I0903 23:30:43.689298  344879 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-009884-m04
	I0903 23:30:43.706288  344879 host.go:66] Checking if "ha-009884-m04" exists ...
	I0903 23:30:43.706581  344879 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0903 23:30:43.706619  344879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-009884-m04
	I0903 23:30:43.725316  344879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/21341-295927/.minikube/machines/ha-009884-m04/id_rsa Username:docker}
	I0903 23:30:43.812879  344879 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0903 23:30:43.827979  344879 status.go:176] ha-009884-m04 status: &{Name:ha-009884-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.75s)
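The stderr trace above shows how the status check decides an apiserver is healthy: it pgreps for the kube-apiserver process, reads that PID's freezer cgroup path from /proc/<pid>/cgroup, confirms the cgroup is THAWED, and only then curls /healthz. A minimal Go sketch of that sequence, assuming the same sudo helpers seen in the log (function names here are illustrative, not minikube's actual API):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // apiserverFreezerState mirrors the sequence in the trace above; it is an
    // illustrative sketch, not minikube's real helper.
    func apiserverFreezerState() (string, error) {
    	// Step 1: find the apiserver PID (the log's "sudo pgrep -xnf ...").
    	pidOut, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
    	if err != nil {
    		return "", fmt.Errorf("apiserver process not found: %w", err)
    	}
    	pid := strings.TrimSpace(string(pidOut))

    	// Step 2: read the freezer line, e.g. "12:freezer:/docker/<id>/crio/crio-<id>".
    	cgOut, err := exec.Command("sudo", "egrep", "^[0-9]+:freezer:", "/proc/"+pid+"/cgroup").Output()
    	if err != nil {
    		return "", err
    	}
    	parts := strings.SplitN(strings.TrimSpace(string(cgOut)), ":", 3)
    	if len(parts) != 3 {
    		return "", fmt.Errorf("unexpected cgroup line: %q", cgOut)
    	}

    	// Step 3: the path after the second colon locates freezer.state.
    	stateOut, err := exec.Command("sudo", "cat",
    		"/sys/fs/cgroup/freezer"+parts[2]+"/freezer.state").Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(stateOut)), nil // "THAWED" means not paused
    }

    func main() {
    	state, err := apiserverFreezerState()
    	fmt.Println(state, err)
    }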

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.75s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.75s)

TestMultiControlPlane/serial/RestartSecondaryNode (32.59s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 node start m02 --alsologtostderr -v 5
E0903 23:30:59.555744  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/functional-062474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-009884 node start m02 --alsologtostderr -v 5: (31.383313228s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-009884 status --alsologtostderr -v 5: (1.077644497s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (32.59s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.09s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.088162861s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.09s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (133.52s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-009884 stop --alsologtostderr -v 5: (26.455618177s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 start --wait true --alsologtostderr -v 5
E0903 23:32:21.478807  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/functional-062474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:33:02.123831  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/addons-250903/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-009884 start --wait true --alsologtostderr -v 5: (1m46.902067596s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (133.52s)

TestMultiControlPlane/serial/DeleteSecondaryNode (12.33s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-009884 node delete m03 --alsologtostderr -v 5: (11.356675363s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.33s)
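The readiness check above pipes kubectl through a Go template. kubectl evaluates the template against the raw JSON object, which is why the field paths in the log are lowercase (.items, .status.conditions); the sketch below runs the same template shape over illustrative structs, which need exported names as text/template requires for Go values:

    package main

    import (
    	"os"
    	"text/template"
    )

    // Illustrative stand-ins for the node list kubectl renders.
    type condition struct{ Type, Status string }

    type node struct {
    	Status struct{ Conditions []condition }
    }

    func main() {
    	// Same template shape as the readiness check above, with exported names.
    	tpl := template.Must(template.New("ready").Parse(
    		`{{range .Items}}{{range .Status.Conditions}}{{if eq .Type "Ready"}} {{.Status}}{{"\n"}}{{end}}{{end}}{{end}}`))

    	ready := condition{Type: "Ready", Status: "True"}
    	var n node
    	n.Status.Conditions = []condition{ready}

    	list := struct{ Items []node }{Items: []node{n, n}}
    	_ = tpl.Execute(os.Stdout, list) // prints " True" once per node
    }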

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.75s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.75s)

TestMultiControlPlane/serial/StopCluster (35.79s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-009884 stop --alsologtostderr -v 5: (35.68158322s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-009884 status --alsologtostderr -v 5: exit status 7 (108.395788ms)
-- stdout --
	ha-009884
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-009884-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-009884-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0903 23:34:20.592464  358676 out.go:360] Setting OutFile to fd 1 ...
	I0903 23:34:20.592662  358676 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 23:34:20.592689  358676 out.go:374] Setting ErrFile to fd 2...
	I0903 23:34:20.592707  358676 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 23:34:20.593008  358676 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21341-295927/.minikube/bin
	I0903 23:34:20.593228  358676 out.go:368] Setting JSON to false
	I0903 23:34:20.593305  358676 mustload.go:65] Loading cluster: ha-009884
	I0903 23:34:20.593384  358676 notify.go:220] Checking for updates...
	I0903 23:34:20.594322  358676 config.go:182] Loaded profile config "ha-009884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0903 23:34:20.594379  358676 status.go:174] checking status of ha-009884 ...
	I0903 23:34:20.594930  358676 cli_runner.go:164] Run: docker container inspect ha-009884 --format={{.State.Status}}
	I0903 23:34:20.614458  358676 status.go:371] ha-009884 host status = "Stopped" (err=<nil>)
	I0903 23:34:20.614480  358676 status.go:384] host is not running, skipping remaining checks
	I0903 23:34:20.614487  358676 status.go:176] ha-009884 status: &{Name:ha-009884 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0903 23:34:20.614513  358676 status.go:174] checking status of ha-009884-m02 ...
	I0903 23:34:20.614815  358676 cli_runner.go:164] Run: docker container inspect ha-009884-m02 --format={{.State.Status}}
	I0903 23:34:20.637391  358676 status.go:371] ha-009884-m02 host status = "Stopped" (err=<nil>)
	I0903 23:34:20.637412  358676 status.go:384] host is not running, skipping remaining checks
	I0903 23:34:20.637419  358676 status.go:176] ha-009884-m02 status: &{Name:ha-009884-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0903 23:34:20.637438  358676 status.go:174] checking status of ha-009884-m04 ...
	I0903 23:34:20.637721  358676 cli_runner.go:164] Run: docker container inspect ha-009884-m04 --format={{.State.Status}}
	I0903 23:34:20.653798  358676 status.go:371] ha-009884-m04 host status = "Stopped" (err=<nil>)
	I0903 23:34:20.653819  358676 status.go:384] host is not running, skipping remaining checks
	I0903 23:34:20.653825  358676 status.go:176] ha-009884-m04 status: &{Name:ha-009884-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.79s)
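The stderr above also shows the fast path for stopped hosts: status inspects each container's state first and, when it is anything but running, records Stopped for every component without opening an SSH session. A rough sketch of that short-circuit, assuming docker on PATH and reusing the inspect format string from the log (helper name is illustrative):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // hostState runs the same inspect the log shows; docker reports lowercase
    // states ("running", "exited"), which minikube maps to Running/Stopped.
    func hostState(name string) (string, error) {
    	out, err := exec.Command("docker", "container", "inspect", name,
    		"--format", "{{.State.Status}}").Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	for _, n := range []string{"ha-009884", "ha-009884-m02", "ha-009884-m04"} {
    		state, err := hostState(n)
    		if err != nil || state != "running" {
    			// Mirrors status.go:384: no SSH, kubelet, or apiserver checks.
    			fmt.Println(n, "host is not running, skipping remaining checks")
    			continue
    		}
    		fmt.Println(n, "host running; kubelet/apiserver checks would follow")
    	}
    }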

TestMultiControlPlane/serial/RestartCluster (80.49s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E0903 23:34:37.616229  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/functional-062474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:35:05.320274  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/functional-062474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-009884 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m19.522211318s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (80.49s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.75s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.75s)

TestMultiControlPlane/serial/AddSecondaryNode (81.01s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-009884 node add --control-plane --alsologtostderr -v 5: (1m20.001977026s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-009884 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-009884 status --alsologtostderr -v 5: (1.012333367s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (81.01s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.01s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.012719445s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.01s)

TestJSONOutput/start/Command (81.33s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-275547 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E0903 23:38:02.120646  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/addons-250903/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-275547 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m21.322334643s)
--- PASS: TestJSONOutput/start/Command (81.33s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.76s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-275547 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.76s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.66s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-275547 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.66s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.93s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-275547 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-275547 --output=json --user=testUser: (5.929143246s)
--- PASS: TestJSONOutput/stop/Command (5.93s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.35s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-545624 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-545624 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (106.549683ms)
-- stdout --
	{"specversion":"1.0","id":"001c88ec-d188-47e4-bc8d-35684314dfc1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-545624] minikube v1.36.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8748a92f-84b0-40f5-9686-3ff507d4cae5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21341"}}
	{"specversion":"1.0","id":"ee9567e9-7b77-475a-819b-13e6100fbb37","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"caa0192e-466f-4972-ad33-ae1375fdba1d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21341-295927/kubeconfig"}}
	{"specversion":"1.0","id":"277bc6dd-d331-4a35-b475-9dec84e10817","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21341-295927/.minikube"}}
	{"specversion":"1.0","id":"82bf7f4c-0f6b-45b7-80ab-20e72f886149","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"86a0148f-e2c2-40e0-8241-8edbbf8eadec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"5da967cc-174a-42eb-958e-f628fad03c84","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-545624" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-545624
--- PASS: TestErrorJSONOutput (0.35s)
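Every line in the stdout above is a CloudEvents 1.0 envelope; minikube distinguishes steps, info lines, and errors through the type field (io.k8s.sigs.minikube.step/.info/.error) and carries the payload in data. A small decoding sketch, using a hand-rolled struct rather than minikube's internal type, over an abridged copy of the error event above:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // cloudEvent is an illustrative envelope; all payload values in this
    // report's events are strings, so a string map suffices for data.
    type cloudEvent struct {
    	SpecVersion string            `json:"specversion"`
    	ID          string            `json:"id"`
    	Source      string            `json:"source"`
    	Type        string            `json:"type"`
    	Data        map[string]string `json:"data"`
    }

    func main() {
    	line := `{"specversion":"1.0","id":"5da967cc-174a-42eb-958e-f628fad03c84","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS"}}`
    	var ev cloudEvent
    	if err := json.Unmarshal([]byte(line), &ev); err != nil {
    		panic(err)
    	}
    	fmt.Println(ev.Type, ev.Data["name"], "exit", ev.Data["exitcode"])
    }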

TestKicCustomNetwork/create_custom_network (41.84s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-966542 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-966542 --network=: (39.70393776s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-966542" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-966542
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-966542: (2.118780769s)
--- PASS: TestKicCustomNetwork/create_custom_network (41.84s)

TestKicCustomNetwork/use_default_bridge_network (33.92s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-089285 --network=bridge
E0903 23:39:37.619857  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/functional-062474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-089285 --network=bridge: (31.455612898s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-089285" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-089285
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-089285: (2.443469532s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (33.92s)

TestKicExistingNetwork (36.09s)

=== RUN   TestKicExistingNetwork
I0903 23:40:01.026039  297789 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0903 23:40:01.043429  297789 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0903 23:40:01.043512  297789 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0903 23:40:01.043530  297789 cli_runner.go:164] Run: docker network inspect existing-network
W0903 23:40:01.058881  297789 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0903 23:40:01.058911  297789 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]
stderr:
Error response from daemon: network existing-network not found
I0903 23:40:01.058928  297789 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I0903 23:40:01.059037  297789 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0903 23:40:01.076928  297789 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-6656a71762a1 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:b6:e0:1d:cf:f7:e8} reservation:<nil>}
I0903 23:40:01.077262  297789 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019bcf60}
I0903 23:40:01.077288  297789 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0903 23:40:01.077341  297789 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0903 23:40:01.144694  297789 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-720232 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-720232 --network=existing-network: (33.940316008s)
helpers_test.go:175: Cleaning up "existing-network-720232" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-720232
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-720232: (1.997424275s)
I0903 23:40:37.100536  297789 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (36.09s)
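The trace above captures the free-subnet search: 192.168.49.0/24 is skipped because a bridge interface already owns it, and 192.168.58.0/24 becomes the next candidate. The sketch below reproduces that walk under the assumption, inferred only from the subnets seen in this report (49, 58, 67, ...), that candidates advance in steps of 9; the helper name is hypothetical:

    package main

    import (
    	"fmt"
    	"net"
    )

    // subnetTaken reports whether any host interface already has an address
    // inside cidr; a stand-in for the reservation logic in network.go.
    func subnetTaken(cidr string) bool {
    	_, block, err := net.ParseCIDR(cidr)
    	if err != nil {
    		return true
    	}
    	addrs, _ := net.InterfaceAddrs()
    	for _, a := range addrs {
    		if ip, _, err := net.ParseCIDR(a.String()); err == nil && block.Contains(ip) {
    			return true
    		}
    	}
    	return false
    }

    func main() {
    	// 49, 58, 67, ... — the step of 9 is inferred from this report,
    	// not taken from minikube's source.
    	for third := 49; third <= 254; third += 9 {
    		cidr := fmt.Sprintf("192.168.%d.0/24", third)
    		if subnetTaken(cidr) {
    			fmt.Println("skipping subnet that is taken:", cidr)
    			continue
    		}
    		fmt.Println("using free private subnet:", cidr)
    		return
    	}
    }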

TestKicCustomSubnet (33.58s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-684519 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-684519 --subnet=192.168.60.0/24: (31.394125196s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-684519 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-684519" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-684519
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-684519: (2.157426357s)
--- PASS: TestKicCustomSubnet (33.58s)

TestKicStaticIP (34.3s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-779022 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-779022 --static-ip=192.168.200.200: (31.960766665s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-779022 ip
helpers_test.go:175: Cleaning up "static-ip-779022" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-779022
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-779022: (2.182795165s)
--- PASS: TestKicStaticIP (34.30s)

TestMainNoArgs (0.13s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.13s)

TestMinikubeProfile (71.48s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-624377 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-624377 --driver=docker  --container-runtime=crio: (30.972351861s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-627717 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-627717 --driver=docker  --container-runtime=crio: (35.142843048s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-624377
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-627717
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-627717" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-627717
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-627717: (2.002815621s)
helpers_test.go:175: Cleaning up "first-624377" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-624377
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-624377: (1.939045487s)
--- PASS: TestMinikubeProfile (71.48s)

TestMountStart/serial/StartWithMountFirst (9.62s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-803733 --memory=3072 --mount-string /tmp/TestMountStartserial1958465569/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
E0903 23:43:02.120358  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/addons-250903/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-803733 --memory=3072 --mount-string /tmp/TestMountStartserial1958465569/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.61603701s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.62s)

TestMountStart/serial/VerifyMountFirst (0.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-803733 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

TestMountStart/serial/StartWithMountSecond (6.3s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-805661 --memory=3072 --mount-string /tmp/TestMountStartserial1958465569/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-805661 --memory=3072 --mount-string /tmp/TestMountStartserial1958465569/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.294092437s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.30s)

TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-805661 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

TestMountStart/serial/DeleteFirst (1.63s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-803733 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-803733 --alsologtostderr -v=5: (1.628688938s)
--- PASS: TestMountStart/serial/DeleteFirst (1.63s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-805661 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.24s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-805661
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-805661: (1.244719854s)
--- PASS: TestMountStart/serial/Stop (1.24s)

TestMountStart/serial/RestartStopped (7.55s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-805661
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-805661: (6.549737232s)
--- PASS: TestMountStart/serial/RestartStopped (7.55s)

TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-805661 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

TestMultiNode/serial/FreshStart2Nodes (140.7s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-328811 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E0903 23:44:37.615793  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/functional-062474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-328811 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m20.153902487s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-328811 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (140.70s)

TestMultiNode/serial/DeployApp2Nodes (6.45s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-328811 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-328811 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-328811 -- rollout status deployment/busybox: (4.555288728s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-328811 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-328811 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-328811 -- exec busybox-7b57f96db7-knhcb -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-328811 -- exec busybox-7b57f96db7-r689c -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-328811 -- exec busybox-7b57f96db7-knhcb -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-328811 -- exec busybox-7b57f96db7-r689c -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-328811 -- exec busybox-7b57f96db7-knhcb -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-328811 -- exec busybox-7b57f96db7-r689c -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.45s)

TestMultiNode/serial/PingHostFrom2Pods (0.98s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-328811 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-328811 -- exec busybox-7b57f96db7-knhcb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-328811 -- exec busybox-7b57f96db7-knhcb -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-328811 -- exec busybox-7b57f96db7-r689c -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-328811 -- exec busybox-7b57f96db7-r689c -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.98s)
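The shell pipeline in the pods above (nslookup | awk 'NR==5' | cut -d' ' -f3) relies on busybox nslookup printing the answer on its fifth line as "Address 1: <ip> <name>". A sketch of the same extraction over illustrative output; note that strings.Fields collapses runs of spaces, so the IP lands at index 2 where cut counts it as field 3:

    package main

    import (
    	"fmt"
    	"strings"
    )

    func main() {
    	// Illustrative busybox nslookup output; the real pods queried
    	// host.minikube.internal and pinged 192.168.67.1 back (see above).
    	out := "Server:    10.96.0.10\n" +
    		"Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n" +
    		"\n" +
    		"Name:      host.minikube.internal\n" +
    		"Address 1: 192.168.67.1 host.minikube.internal\n"

    	lines := strings.Split(out, "\n")
    	fields := strings.Fields(lines[4]) // awk 'NR==5' selects the fifth line
    	fmt.Println(fields[2])             // 192.168.67.1
    }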

TestMultiNode/serial/AddNode (54.23s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-328811 -v=5 --alsologtostderr
E0903 23:46:00.681862  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/functional-062474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0903 23:46:05.193062  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/addons-250903/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-328811 -v=5 --alsologtostderr: (53.562508097s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-328811 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (54.23s)

TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-328811 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.65s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.65s)

TestMultiNode/serial/CopyFile (10.07s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-328811 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-328811 cp testdata/cp-test.txt multinode-328811:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-328811 ssh -n multinode-328811 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-328811 cp multinode-328811:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1284641700/001/cp-test_multinode-328811.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-328811 ssh -n multinode-328811 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-328811 cp multinode-328811:/home/docker/cp-test.txt multinode-328811-m02:/home/docker/cp-test_multinode-328811_multinode-328811-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-328811 ssh -n multinode-328811 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-328811 ssh -n multinode-328811-m02 "sudo cat /home/docker/cp-test_multinode-328811_multinode-328811-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-328811 cp multinode-328811:/home/docker/cp-test.txt multinode-328811-m03:/home/docker/cp-test_multinode-328811_multinode-328811-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-328811 ssh -n multinode-328811 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-328811 ssh -n multinode-328811-m03 "sudo cat /home/docker/cp-test_multinode-328811_multinode-328811-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-328811 cp testdata/cp-test.txt multinode-328811-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-328811 ssh -n multinode-328811-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-328811 cp multinode-328811-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1284641700/001/cp-test_multinode-328811-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-328811 ssh -n multinode-328811-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-328811 cp multinode-328811-m02:/home/docker/cp-test.txt multinode-328811:/home/docker/cp-test_multinode-328811-m02_multinode-328811.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-328811 ssh -n multinode-328811-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-328811 ssh -n multinode-328811 "sudo cat /home/docker/cp-test_multinode-328811-m02_multinode-328811.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-328811 cp multinode-328811-m02:/home/docker/cp-test.txt multinode-328811-m03:/home/docker/cp-test_multinode-328811-m02_multinode-328811-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-328811 ssh -n multinode-328811-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-328811 ssh -n multinode-328811-m03 "sudo cat /home/docker/cp-test_multinode-328811-m02_multinode-328811-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-328811 cp testdata/cp-test.txt multinode-328811-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-328811 ssh -n multinode-328811-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-328811 cp multinode-328811-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1284641700/001/cp-test_multinode-328811-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-328811 ssh -n multinode-328811-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-328811 cp multinode-328811-m03:/home/docker/cp-test.txt multinode-328811:/home/docker/cp-test_multinode-328811-m03_multinode-328811.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-328811 ssh -n multinode-328811-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-328811 ssh -n multinode-328811 "sudo cat /home/docker/cp-test_multinode-328811-m03_multinode-328811.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-328811 cp multinode-328811-m03:/home/docker/cp-test.txt multinode-328811-m02:/home/docker/cp-test_multinode-328811-m03_multinode-328811-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-328811 ssh -n multinode-328811-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-328811 ssh -n multinode-328811-m02 "sudo cat /home/docker/cp-test_multinode-328811-m03_multinode-328811-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.07s)
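The CopyFile block above is a matrix of minikube cp followed by minikube ssh "sudo cat" to confirm each transfer landed intact on every node pair. A condensed sketch of one row of that matrix, reusing the binary path and node names from the log; the helper is illustrative, not the test's real code:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    const minikube = "out/minikube-linux-arm64"

    // cpAndReadBack pushes the fixture to one node and reads it back over
    // SSH, the same two commands each row of the matrix above runs.
    func cpAndReadBack(profile, node, remotePath string) (string, error) {
    	if out, err := exec.Command(minikube, "-p", profile, "cp",
    		"testdata/cp-test.txt", node+":"+remotePath).CombinedOutput(); err != nil {
    		return string(out), err
    	}
    	out, err := exec.Command(minikube, "-p", profile, "ssh", "-n", node,
    		"sudo cat "+remotePath).Output()
    	return string(out), err
    }

    func main() {
    	profile := "multinode-328811"
    	for _, node := range []string{profile, profile + "-m02", profile + "-m03"} {
    		got, err := cpAndReadBack(profile, node, "/home/docker/cp-test.txt")
    		fmt.Printf("%s: err=%v content=%q\n", node, err, got)
    	}
    }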

TestMultiNode/serial/StopNode (2.5s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-328811 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-328811 node stop m03: (1.368575712s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-328811 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-328811 status: exit status 7 (597.532044ms)
-- stdout --
	multinode-328811
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-328811-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-328811-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-328811 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-328811 status --alsologtostderr: exit status 7 (537.070629ms)
-- stdout --
	multinode-328811
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-328811-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-328811-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0903 23:47:00.979579  411914 out.go:360] Setting OutFile to fd 1 ...
	I0903 23:47:00.980011  411914 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 23:47:00.980025  411914 out.go:374] Setting ErrFile to fd 2...
	I0903 23:47:00.980032  411914 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 23:47:00.980530  411914 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21341-295927/.minikube/bin
	I0903 23:47:00.980923  411914 out.go:368] Setting JSON to false
	I0903 23:47:00.980989  411914 mustload.go:65] Loading cluster: multinode-328811
	I0903 23:47:00.981831  411914 config.go:182] Loaded profile config "multinode-328811": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0903 23:47:00.981890  411914 status.go:174] checking status of multinode-328811 ...
	I0903 23:47:00.982701  411914 cli_runner.go:164] Run: docker container inspect multinode-328811 --format={{.State.Status}}
	I0903 23:47:00.984979  411914 notify.go:220] Checking for updates...
	I0903 23:47:01.011812  411914 status.go:371] multinode-328811 host status = "Running" (err=<nil>)
	I0903 23:47:01.011836  411914 host.go:66] Checking if "multinode-328811" exists ...
	I0903 23:47:01.012142  411914 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-328811
	I0903 23:47:01.032333  411914 host.go:66] Checking if "multinode-328811" exists ...
	I0903 23:47:01.032735  411914 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0903 23:47:01.032788  411914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-328811
	I0903 23:47:01.054935  411914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33273 SSHKeyPath:/home/jenkins/minikube-integration/21341-295927/.minikube/machines/multinode-328811/id_rsa Username:docker}
	I0903 23:47:01.145887  411914 ssh_runner.go:195] Run: systemctl --version
	I0903 23:47:01.150790  411914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0903 23:47:01.163940  411914 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0903 23:47:01.227350  411914 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-09-03 23:47:01.216630779 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0903 23:47:01.228052  411914 kubeconfig.go:125] found "multinode-328811" server: "https://192.168.67.2:8443"
	I0903 23:47:01.228105  411914 api_server.go:166] Checking apiserver status ...
	I0903 23:47:01.228935  411914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0903 23:47:01.241509  411914 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1382/cgroup
	I0903 23:47:01.251624  411914 api_server.go:182] apiserver freezer: "12:freezer:/docker/2780b9aa7b63c7fb62b81e841116fe1eae661a29c2684f18b4d8e3a21e7b47e3/crio/crio-d9d83725ee7d08f5859b6741e0d63803c31e7a5733f36634363acb9e917481de"
	I0903 23:47:01.251722  411914 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/2780b9aa7b63c7fb62b81e841116fe1eae661a29c2684f18b4d8e3a21e7b47e3/crio/crio-d9d83725ee7d08f5859b6741e0d63803c31e7a5733f36634363acb9e917481de/freezer.state
	I0903 23:47:01.261635  411914 api_server.go:204] freezer state: "THAWED"
	I0903 23:47:01.261667  411914 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0903 23:47:01.270367  411914 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0903 23:47:01.270406  411914 status.go:463] multinode-328811 apiserver status = Running (err=<nil>)
	I0903 23:47:01.270419  411914 status.go:176] multinode-328811 status: &{Name:multinode-328811 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0903 23:47:01.270440  411914 status.go:174] checking status of multinode-328811-m02 ...
	I0903 23:47:01.270794  411914 cli_runner.go:164] Run: docker container inspect multinode-328811-m02 --format={{.State.Status}}
	I0903 23:47:01.289423  411914 status.go:371] multinode-328811-m02 host status = "Running" (err=<nil>)
	I0903 23:47:01.289484  411914 host.go:66] Checking if "multinode-328811-m02" exists ...
	I0903 23:47:01.289810  411914 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-328811-m02
	I0903 23:47:01.308612  411914 host.go:66] Checking if "multinode-328811-m02" exists ...
	I0903 23:47:01.308936  411914 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0903 23:47:01.308986  411914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-328811-m02
	I0903 23:47:01.326928  411914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33278 SSHKeyPath:/home/jenkins/minikube-integration/21341-295927/.minikube/machines/multinode-328811-m02/id_rsa Username:docker}
	I0903 23:47:01.421057  411914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0903 23:47:01.433791  411914 status.go:176] multinode-328811-m02 status: &{Name:multinode-328811-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0903 23:47:01.433831  411914 status.go:174] checking status of multinode-328811-m03 ...
	I0903 23:47:01.434127  411914 cli_runner.go:164] Run: docker container inspect multinode-328811-m03 --format={{.State.Status}}
	I0903 23:47:01.454527  411914 status.go:371] multinode-328811-m03 host status = "Stopped" (err=<nil>)
	I0903 23:47:01.454579  411914 status.go:384] host is not running, skipping remaining checks
	I0903 23:47:01.454587  411914 status.go:176] multinode-328811-m03 status: &{Name:multinode-328811-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.50s)
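
The assertion here rides on the exit code: `minikube status` exits 7 whenever any host in the profile is stopped, while stdout still lists the per-node breakdown. A minimal sketch of the same check, with names copied from this run and `minikube` standing in for out/minikube-linux-arm64:

    minikube -p multinode-328811 node stop m03
    minikube -p multinode-328811 status    # prints per-node state as above
    echo $?                                # 7: non-zero because one host is Stopped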

                                                
                                    
TestMultiNode/serial/StartAfterStop (8.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-328811 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-328811 node start m03 -v=5 --alsologtostderr: (7.299643797s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-328811 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.06s)
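
The inverse operation restarts only the stopped node; once every host is running again, `status` should return to exit 0. Sketch under the same assumptions:

    minikube -p multinode-328811 node start m03
    minikube -p multinode-328811 status    # exit 0 again once all hosts run
    kubectl get nodes                      # the restarted node should rejoin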

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (76.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-328811
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-328811
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-328811: (24.812197192s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-328811 --wait=true -v=5 --alsologtostderr
E0903 23:48:02.118953  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/addons-250903/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-328811 --wait=true -v=5 --alsologtostderr: (51.755637675s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-328811
--- PASS: TestMultiNode/serial/RestartKeepsNodes (76.69s)
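
What this verifies is that a full stop/start cycle preserves the node list instead of collapsing the profile to a single node. Sketch under the same assumptions:

    minikube node list -p multinode-328811   # note the current nodes
    minikube stop -p multinode-328811
    minikube start -p multinode-328811 --wait=true
    minikube node list -p multinode-328811   # expect the same set back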

                                                
                                    
TestMultiNode/serial/DeleteNode (5.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-328811 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-328811 node delete m03: (4.696527913s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-328811 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.36s)
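
The go-template in the last command is how the test counts Ready nodes after the delete. The same check by hand, under the same assumptions:

    minikube -p multinode-328811 node delete m03
    # prints one status ("True") per remaining Ready node
    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'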

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.84s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-328811 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-328811 stop: (23.648585338s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-328811 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-328811 status: exit status 7 (92.825602ms)

                                                
                                                
-- stdout --
	multinode-328811
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-328811-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-328811 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-328811 status --alsologtostderr: exit status 7 (94.24891ms)

                                                
                                                
-- stdout --
	multinode-328811
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-328811-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0903 23:48:55.374691  419732 out.go:360] Setting OutFile to fd 1 ...
	I0903 23:48:55.374889  419732 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 23:48:55.374917  419732 out.go:374] Setting ErrFile to fd 2...
	I0903 23:48:55.374935  419732 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0903 23:48:55.375243  419732 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21341-295927/.minikube/bin
	I0903 23:48:55.375480  419732 out.go:368] Setting JSON to false
	I0903 23:48:55.375548  419732 mustload.go:65] Loading cluster: multinode-328811
	I0903 23:48:55.375636  419732 notify.go:220] Checking for updates...
	I0903 23:48:55.376019  419732 config.go:182] Loaded profile config "multinode-328811": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0903 23:48:55.376043  419732 status.go:174] checking status of multinode-328811 ...
	I0903 23:48:55.376571  419732 cli_runner.go:164] Run: docker container inspect multinode-328811 --format={{.State.Status}}
	I0903 23:48:55.396074  419732 status.go:371] multinode-328811 host status = "Stopped" (err=<nil>)
	I0903 23:48:55.396097  419732 status.go:384] host is not running, skipping remaining checks
	I0903 23:48:55.396103  419732 status.go:176] multinode-328811 status: &{Name:multinode-328811 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0903 23:48:55.396139  419732 status.go:174] checking status of multinode-328811-m02 ...
	I0903 23:48:55.396454  419732 cli_runner.go:164] Run: docker container inspect multinode-328811-m02 --format={{.State.Status}}
	I0903 23:48:55.415879  419732 status.go:371] multinode-328811-m02 host status = "Stopped" (err=<nil>)
	I0903 23:48:55.415900  419732 status.go:384] host is not running, skipping remaining checks
	I0903 23:48:55.415907  419732 status.go:176] multinode-328811-m02 status: &{Name:multinode-328811-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.84s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (49.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-328811 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E0903 23:49:37.615835  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/functional-062474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-328811 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (48.697024486s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-328811 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (49.37s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (34.62s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-328811
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-328811-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-328811-m02 --driver=docker  --container-runtime=crio: exit status 14 (97.686729ms)

                                                
                                                
-- stdout --
	* [multinode-328811-m02] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21341
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21341-295927/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21341-295927/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-328811-m02' is duplicated with machine name 'multinode-328811-m02' in profile 'multinode-328811'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-328811-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-328811-m03 --driver=docker  --container-runtime=crio: (32.216291063s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-328811
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-328811: exit status 80 (335.241103ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-328811 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-328811-m03 already exists in multinode-328811-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-328811-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-328811-m03: (1.917596819s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (34.62s)
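
Both failures are deliberate: a profile name may not collide with a machine name inside an existing profile (exit 14, MK_USAGE), and `node add` refuses when a standalone profile already owns the next node name (exit 80, GUEST_NODE_ADD, in this run). Sketch with names copied from this run:

    # collides with machine multinode-328811-m02 inside profile multinode-328811
    minikube start -p multinode-328811-m02 --driver=docker --container-runtime=crio   # exit 14
    # with a standalone multinode-328811-m03 profile present:
    minikube node add -p multinode-328811                                             # exit 80 here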

                                                
                                    
TestPreload (141.52s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-213372 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-213372 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m36.008137164s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-213372 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-213372 image pull gcr.io/k8s-minikube/busybox: (3.637944371s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-213372
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-213372: (5.809796277s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-213372 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-213372 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (33.470273034s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-213372 image list
helpers_test.go:175: Cleaning up "test-preload-213372" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-213372
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-213372: (2.362310538s)
--- PASS: TestPreload (141.52s)
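
The point of the sequence: with --preload=false the image store starts empty, busybox is pulled by hand, and the final `image list` after a stop/start cycle is expected to still show it. Condensed sketch, same assumptions:

    minikube start -p test-preload-213372 --preload=false --kubernetes-version=v1.24.4 \
        --driver=docker --container-runtime=crio
    minikube -p test-preload-213372 image pull gcr.io/k8s-minikube/busybox
    minikube stop -p test-preload-213372
    minikube start -p test-preload-213372 --driver=docker --container-runtime=crio
    minikube -p test-preload-213372 image list   # busybox should survive the restart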

                                                
                                    
TestScheduledStopUnix (104.58s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-324354 --memory=3072 --driver=docker  --container-runtime=crio
E0903 23:53:02.120736  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/addons-250903/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-324354 --memory=3072 --driver=docker  --container-runtime=crio: (28.773926165s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-324354 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-324354 -n scheduled-stop-324354
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-324354 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0903 23:53:14.250363  297789 retry.go:31] will retry after 108.655µs: open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/scheduled-stop-324354/pid: no such file or directory
I0903 23:53:14.250770  297789 retry.go:31] will retry after 215.841µs: open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/scheduled-stop-324354/pid: no such file or directory
I0903 23:53:14.251887  297789 retry.go:31] will retry after 259.234µs: open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/scheduled-stop-324354/pid: no such file or directory
I0903 23:53:14.253041  297789 retry.go:31] will retry after 295.394µs: open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/scheduled-stop-324354/pid: no such file or directory
I0903 23:53:14.254165  297789 retry.go:31] will retry after 335.866µs: open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/scheduled-stop-324354/pid: no such file or directory
I0903 23:53:14.255305  297789 retry.go:31] will retry after 935.587µs: open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/scheduled-stop-324354/pid: no such file or directory
I0903 23:53:14.256528  297789 retry.go:31] will retry after 1.044923ms: open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/scheduled-stop-324354/pid: no such file or directory
I0903 23:53:14.257668  297789 retry.go:31] will retry after 1.783387ms: open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/scheduled-stop-324354/pid: no such file or directory
I0903 23:53:14.260108  297789 retry.go:31] will retry after 1.658916ms: open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/scheduled-stop-324354/pid: no such file or directory
I0903 23:53:14.262334  297789 retry.go:31] will retry after 4.729006ms: open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/scheduled-stop-324354/pid: no such file or directory
I0903 23:53:14.269128  297789 retry.go:31] will retry after 7.872973ms: open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/scheduled-stop-324354/pid: no such file or directory
I0903 23:53:14.277379  297789 retry.go:31] will retry after 11.053658ms: open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/scheduled-stop-324354/pid: no such file or directory
I0903 23:53:14.288551  297789 retry.go:31] will retry after 9.248645ms: open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/scheduled-stop-324354/pid: no such file or directory
I0903 23:53:14.298068  297789 retry.go:31] will retry after 22.763008ms: open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/scheduled-stop-324354/pid: no such file or directory
I0903 23:53:14.321292  297789 retry.go:31] will retry after 32.269081ms: open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/scheduled-stop-324354/pid: no such file or directory
I0903 23:53:14.354598  297789 retry.go:31] will retry after 58.033035ms: open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/scheduled-stop-324354/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-324354 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-324354 -n scheduled-stop-324354
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-324354
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-324354 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-324354
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-324354: exit status 7 (69.199427ms)

                                                
                                                
-- stdout --
	scheduled-stop-324354
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-324354 -n scheduled-stop-324354
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-324354 -n scheduled-stop-324354: exit status 7 (72.300433ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-324354" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-324354
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-324354: (4.159302592s)
--- PASS: TestScheduledStopUnix (104.58s)
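
The flags under test: --schedule arms a delayed stop, --cancel-scheduled disarms it, and once a short schedule fires, `status` reports Stopped with exit 7 (which the harness notes "may be ok"). Condensed sketch, same assumptions:

    minikube stop -p scheduled-stop-324354 --schedule 5m          # arm
    minikube stop -p scheduled-stop-324354 --cancel-scheduled     # disarm
    minikube stop -p scheduled-stop-324354 --schedule 15s         # re-arm, then wait
    minikube status -p scheduled-stop-324354                      # exit 7 once stopped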

                                                
                                    
TestInsufficientStorage (10.46s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-582140 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-582140 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (8.016233697s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"1f75328f-59a6-41e8-af3d-d0ab18fdb509","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-582140] minikube v1.36.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c1505211-a057-46dc-8ccc-5a12f21c8f48","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21341"}}
	{"specversion":"1.0","id":"1c9ebaaf-5c3c-4a65-91a2-b122e177c09f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"706eeeb4-4696-40ab-b421-f56ed4976c62","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21341-295927/kubeconfig"}}
	{"specversion":"1.0","id":"69cc4760-3cdf-4f08-a76f-b6eca730078e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21341-295927/.minikube"}}
	{"specversion":"1.0","id":"18053113-4407-44fe-a869-a4d6cfd9c133","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"1a6816eb-57bd-487e-b5ad-a2e53b8bb04e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"9e6d0670-9809-4e93-a70c-d046d2c293d6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"cd8a9b08-6c70-49fa-9805-d055af166197","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"37cceb90-fd23-4320-9f7e-537550d1ce60","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"75b95a55-8292-46e2-ad35-28e2147b1524","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"bdebde2e-1b25-43c8-80f3-27af842201c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-582140\" primary control-plane node in \"insufficient-storage-582140\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"7d1baebf-beec-4cac-b19e-8512143ce0c8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.47-1756116447-21413 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"ea71c325-f627-4376-aa67-ba88b1963a91","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"1d0bd612-680b-4163-b253-b8713e870156","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-582140 --output=json --layout=cluster
E0903 23:54:37.616093  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/functional-062474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-582140 --output=json --layout=cluster: exit status 7 (279.744142ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-582140","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.36.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-582140","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0903 23:54:37.810953  437282 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-582140" does not appear in /home/jenkins/minikube-integration/21341-295927/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-582140 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-582140 --output=json --layout=cluster: exit status 7 (292.531398ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-582140","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.36.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-582140","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0903 23:54:38.105172  437346 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-582140" does not appear in /home/jenkins/minikube-integration/21341-295927/kubeconfig
	E0903 23:54:38.115849  437346 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/insufficient-storage-582140/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-582140" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-582140
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-582140: (1.870991395s)
--- PASS: TestInsufficientStorage (10.46s)
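
The low-disk condition is simulated via the MINIKUBE_TEST_STORAGE_CAPACITY / MINIKUBE_TEST_AVAILABLE_STORAGE values echoed in the JSON events above (set as environment variables in this sketch, an assumption about how the harness passes them); `start` then aborts with exit 26, and the error text itself points at '--force' to skip the check. Same assumptions otherwise:

    MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
        minikube start -p insufficient-storage-582140 --output=json --wait=true \
        --driver=docker --container-runtime=crio    # exit 26 (RSRC_DOCKER_STORAGE)
    minikube status -p insufficient-storage-582140 --output=json --layout=cluster
    # exit 7; StatusCode 507 ("InsufficientStorage") in the JSON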

                                                
                                    
TestRunningBinaryUpgrade (79.43s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1298613162 start -p running-upgrade-222218 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1298613162 start -p running-upgrade-222218 --memory=3072 --vm-driver=docker  --container-runtime=crio: (35.961110189s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-222218 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0903 23:59:37.615952  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/functional-062474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-222218 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (39.708401703s)
helpers_test.go:175: Cleaning up "running-upgrade-222218" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-222218
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-222218: (2.913070257s)
--- PASS: TestRunningBinaryUpgrade (79.43s)
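
The upgrade pattern: an old release binary (v1.26.0, unpacked by the test under /tmp) creates the cluster, then the current binary runs `start` on the same profile while it is still running and takes it over in place. Sketch, with the temp-binary path copied from this run:

    /tmp/minikube-v1.26.0.1298613162 start -p running-upgrade-222218 \
        --vm-driver=docker --container-runtime=crio
    out/minikube-linux-arm64 start -p running-upgrade-222218 \
        --driver=docker --container-runtime=crio   # new binary adopts the running profile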

                                                
                                    
TestKubernetesUpgrade (383.75s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-077786 --memory=3072 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-077786 --memory=3072 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m7.745681832s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-077786
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-077786: (1.256475209s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-077786 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-077786 status --format={{.Host}}: exit status 7 (99.637772ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-077786 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-077786 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m39.755320308s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-077786 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-077786 --memory=3072 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-077786 --memory=3072 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (113.232101ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-077786] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21341
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21341-295927/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21341-295927/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-077786
	    minikube start -p kubernetes-upgrade-077786 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0777862 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.0, by running:
	    
	    minikube start -p kubernetes-upgrade-077786 --kubernetes-version=v1.34.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-077786 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-077786 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (31.820565479s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-077786" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-077786
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-077786: (2.871590839s)
--- PASS: TestKubernetesUpgrade (383.75s)
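
Three transitions are covered: v1.20.0 → v1.34.0 upgrades cleanly after a stop, the downgrade back to v1.20.0 is refused with exit 106 (K8S_DOWNGRADE_UNSUPPORTED) plus the recovery suggestions shown above, and a restart at the current version still works. Condensed sketch, same assumptions:

    minikube start -p kubernetes-upgrade-077786 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=crio
    minikube stop -p kubernetes-upgrade-077786
    minikube start -p kubernetes-upgrade-077786 --kubernetes-version=v1.34.0 --driver=docker --container-runtime=crio   # upgrade
    minikube start -p kubernetes-upgrade-077786 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=crio   # exit 106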

                                                
                                    
TestMissingContainerUpgrade (173.95s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.3331482118 start -p missing-upgrade-521146 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.3331482118 start -p missing-upgrade-521146 --memory=3072 --driver=docker  --container-runtime=crio: (1m32.67801557s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-521146
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-521146: (10.450671933s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-521146
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-521146 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-521146 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m8.04129623s)
helpers_test.go:175: Cleaning up "missing-upgrade-521146" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-521146
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-521146: (2.060103305s)
--- PASS: TestMissingContainerUpgrade (173.95s)
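
The scenario: the profile's Docker container is removed out from under the old binary (docker stop + rm), and a `start` with the current binary has to recreate it for the existing profile. Sketch with names from this run:

    docker stop missing-upgrade-521146
    docker rm missing-upgrade-521146
    # the current binary rebuilds the missing container
    out/minikube-linux-arm64 start -p missing-upgrade-521146 --driver=docker --container-runtime=crio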

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.13s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-560367 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-560367 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (133.662794ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-560367] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21341
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21341-295927/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21341-295927/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.13s)
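
--no-kubernetes and --kubernetes-version are mutually exclusive, hence exit 14; as the error text suggests, a globally configured version is cleared with `config unset` rather than overridden on the command line. Sketch, same assumptions:

    minikube start -p NoKubernetes-560367 --no-kubernetes --kubernetes-version=1.20 \
        --driver=docker --container-runtime=crio   # exit 14 (MK_USAGE)
    minikube config unset kubernetes-version       # clear a global default instead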

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (39.12s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-560367 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-560367 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (38.689394644s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-560367 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (39.12s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (9.76s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-560367 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-560367 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (7.324079566s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-560367 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-560367 status -o json: exit status 2 (284.643977ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-560367","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-560367
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-560367: (2.146515098s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (9.76s)

                                                
                                    
TestNoKubernetes/serial/Start (8.18s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-560367 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-560367 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (8.180796366s)
--- PASS: TestNoKubernetes/serial/Start (8.18s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.39s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-560367 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-560367 "sudo systemctl is-active --quiet service kubelet": exit status 1 (387.412427ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.39s)
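
The verification is plain systemd state over SSH: `systemctl is-active --quiet` exits non-zero when kubelet is not running, and minikube surfaces that as exit 1. Sketch, same assumptions:

    minikube ssh -p NoKubernetes-560367 "sudo systemctl is-active --quiet service kubelet"
    echo $?   # 1 expected while Kubernetes is disabled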

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.44s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.44s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-560367
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-560367: (1.205356559s)
--- PASS: TestNoKubernetes/serial/Stop (1.21s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-560367 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-560367 --driver=docker  --container-runtime=crio: (7.29928593s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.30s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.35s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-560367 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-560367 "sudo systemctl is-active --quiet service kubelet": exit status 1 (347.863781ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.35s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.92s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.92s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (73.66s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3286548194 start -p stopped-upgrade-882660 --memory=3072 --vm-driver=docker  --container-runtime=crio
E0903 23:58:02.120944  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/addons-250903/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3286548194 start -p stopped-upgrade-882660 --memory=3072 --vm-driver=docker  --container-runtime=crio: (36.067877707s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3286548194 -p stopped-upgrade-882660 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3286548194 -p stopped-upgrade-882660 stop: (2.555368935s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-882660 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-882660 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (35.035034414s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (73.66s)
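
Same idea as TestRunningBinaryUpgrade, but across a stopped cluster: the old binary creates and stops the profile, then the current binary starts it. Sketch, with the temp-binary path copied from this run:

    /tmp/minikube-v1.26.0.3286548194 start -p stopped-upgrade-882660 \
        --vm-driver=docker --container-runtime=crio
    /tmp/minikube-v1.26.0.3286548194 -p stopped-upgrade-882660 stop
    out/minikube-linux-arm64 start -p stopped-upgrade-882660 \
        --driver=docker --container-runtime=crio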

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.47s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-882660
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-882660: (1.472477407s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.47s)

                                                
                                    
TestPause/serial/Start (81.9s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-280544 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-280544 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m21.904382768s)
--- PASS: TestPause/serial/Start (81.90s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (43.48s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-280544 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-280544 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (43.451531949s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (43.48s)

TestPause/serial/Pause (0.86s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-280544 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.86s)

TestPause/serial/VerifyStatus (0.34s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-280544 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-280544 --output=json --layout=cluster: exit status 2 (335.904263ms)
-- stdout --
	{"Name":"pause-280544","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.36.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-280544","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.34s)
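
The status JSON above encodes state as HTTP-like codes (200 OK, 418 Paused, 405 Stopped), and the non-zero exit mirrors the paused state rather than a failure. A minimal sketch of extracting the interesting fields, assuming jq is available (jq is not part of the harness):

	# Capture cluster-layout status; "|| true" tolerates the expected exit status 2 while paused.
	out/minikube-linux-arm64 status -p pause-280544 --output=json --layout=cluster > status.json || true
	jq -r '.StatusName' status.json                    # overall state, "Paused" here
	# Per-component states of the first node (apiserver, kubelet).
	jq -r '.Nodes[0].Components | to_entries[] | "\(.key)=\(.value.StatusName)"' status.json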

TestPause/serial/Unpause (1.05s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-280544 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-linux-arm64 unpause -p pause-280544 --alsologtostderr -v=5: (1.046606713s)
--- PASS: TestPause/serial/Unpause (1.05s)

TestPause/serial/PauseAgain (1.5s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-280544 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-280544 --alsologtostderr -v=5: (1.496878651s)
--- PASS: TestPause/serial/PauseAgain (1.50s)

TestPause/serial/DeletePaused (2.74s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-280544 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-280544 --alsologtostderr -v=5: (2.73912783s)
--- PASS: TestPause/serial/DeletePaused (2.74s)

TestPause/serial/VerifyDeletedResources (0.53s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-280544
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-280544: exit status 1 (26.383801ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-280544: no such volume
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.53s)
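
The resource check leans on docker itself: once the profile is deleted, volume inspection must fail, and the profile's containers and networks must be gone. A minimal sketch of the same verification (profile name illustrative):

	# A deleted profile's volume should no longer resolve.
	if docker volume inspect pause-280544 >/dev/null 2>&1; then
	    echo "volume still present: cleanup failed"
	else
	    echo "volume removed as expected"
	fi
	# Any leftover containers or networks for the profile would show up here.
	docker ps -a --filter name=pause-280544
	docker network ls --filter name=pause-280544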

TestNetworkPlugins/group/false (4.79s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-124788 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-124788 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (276.618353ms)
-- stdout --
	* [false-124788] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21341
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21341-295927/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21341-295927/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	
-- /stdout --
** stderr ** 
	I0904 00:02:29.325826  477387 out.go:360] Setting OutFile to fd 1 ...
	I0904 00:02:29.326124  477387 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 00:02:29.326156  477387 out.go:374] Setting ErrFile to fd 2...
	I0904 00:02:29.326178  477387 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0904 00:02:29.326484  477387 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21341-295927/.minikube/bin
	I0904 00:02:29.326940  477387 out.go:368] Setting JSON to false
	I0904 00:02:29.327991  477387 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":9900,"bootTime":1756934250,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0904 00:02:29.328088  477387 start.go:140] virtualization:  
	I0904 00:02:29.333925  477387 out.go:179] * [false-124788] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	I0904 00:02:29.337139  477387 out.go:179]   - MINIKUBE_LOCATION=21341
	I0904 00:02:29.337216  477387 notify.go:220] Checking for updates...
	I0904 00:02:29.341117  477387 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0904 00:02:29.344718  477387 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21341-295927/kubeconfig
	I0904 00:02:29.347635  477387 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21341-295927/.minikube
	I0904 00:02:29.351015  477387 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0904 00:02:29.354808  477387 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0904 00:02:29.358373  477387 config.go:182] Loaded profile config "force-systemd-flag-293997": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0904 00:02:29.358543  477387 driver.go:421] Setting default libvirt URI to qemu:///system
	I0904 00:02:29.397960  477387 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0904 00:02:29.398085  477387 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0904 00:02:29.504965  477387 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-09-04 00:02:29.495948977 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0904 00:02:29.505076  477387 docker.go:318] overlay module found
	I0904 00:02:29.508238  477387 out.go:179] * Using the docker driver based on user configuration
	I0904 00:02:29.511197  477387 start.go:304] selected driver: docker
	I0904 00:02:29.511222  477387 start.go:918] validating driver "docker" against <nil>
	I0904 00:02:29.511237  477387 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0904 00:02:29.514837  477387 out.go:203] 
	W0904 00:02:29.517988  477387 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0904 00:02:29.521006  477387 out.go:203] 
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-124788 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-124788

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-124788

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-124788

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-124788

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-124788

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-124788

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-124788

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-124788

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-124788

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-124788

>>> host: /etc/nsswitch.conf:
* Profile "false-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-124788"

>>> host: /etc/hosts:
* Profile "false-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-124788"

>>> host: /etc/resolv.conf:
* Profile "false-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-124788"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-124788

>>> host: crictl pods:
* Profile "false-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-124788"

>>> host: crictl containers:
* Profile "false-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-124788"

>>> k8s: describe netcat deployment:
error: context "false-124788" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-124788" does not exist

>>> k8s: netcat logs:
error: context "false-124788" does not exist

>>> k8s: describe coredns deployment:
error: context "false-124788" does not exist

>>> k8s: describe coredns pods:
error: context "false-124788" does not exist

>>> k8s: coredns logs:
error: context "false-124788" does not exist

>>> k8s: describe api server pod(s):
error: context "false-124788" does not exist

>>> k8s: api server logs:
error: context "false-124788" does not exist

>>> host: /etc/cni:
* Profile "false-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-124788"

>>> host: ip a s:
* Profile "false-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-124788"

>>> host: ip r s:
* Profile "false-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-124788"

>>> host: iptables-save:
* Profile "false-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-124788"

>>> host: iptables table nat:
* Profile "false-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-124788"

>>> k8s: describe kube-proxy daemon set:
error: context "false-124788" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-124788" does not exist

>>> k8s: kube-proxy logs:
error: context "false-124788" does not exist

>>> host: kubelet daemon status:
* Profile "false-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-124788"

>>> host: kubelet daemon config:
* Profile "false-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-124788"

>>> k8s: kubelet logs:
* Profile "false-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-124788"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-124788"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-124788"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-124788

>>> host: docker daemon status:
* Profile "false-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-124788"

>>> host: docker daemon config:
* Profile "false-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-124788"

>>> host: /etc/docker/daemon.json:
* Profile "false-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-124788"

>>> host: docker system info:
* Profile "false-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-124788"

>>> host: cri-docker daemon status:
* Profile "false-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-124788"

>>> host: cri-docker daemon config:
* Profile "false-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-124788"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-124788"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-124788"

>>> host: cri-dockerd version:
* Profile "false-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-124788"

>>> host: containerd daemon status:
* Profile "false-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-124788"

>>> host: containerd daemon config:
* Profile "false-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-124788"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-124788"

>>> host: /etc/containerd/config.toml:
* Profile "false-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-124788"

>>> host: containerd config dump:
* Profile "false-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-124788"

>>> host: crio daemon status:
* Profile "false-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-124788"

>>> host: crio daemon config:
* Profile "false-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-124788"

>>> host: /etc/crio:
* Profile "false-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-124788"

>>> host: crio config:
* Profile "false-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-124788"

----------------------- debugLogs end: false-124788 [took: 4.318903573s] --------------------------------
helpers_test.go:175: Cleaning up "false-124788" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-124788
--- PASS: TestNetworkPlugins/group/false (4.79s)
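
This test passes because validation rejects the configuration up front: CRI-O provides no built-in pod networking, so disabling CNI exits with MK_USAGE (status 14) before any cluster is created. A minimal sketch of a start line that satisfies the check, with bridge as an illustrative CNI choice:

	# --cni=false is invalid with crio; selecting a real CNI lets validation proceed.
	out/minikube-linux-arm64 start -p cni-demo --memory=3072 --cni=bridge --driver=docker --container-runtime=crio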

TestStartStop/group/old-k8s-version/serial/FirstStart (149.79s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-372074 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
E0904 00:04:37.616582  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/functional-062474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-372074 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m29.791506379s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (149.79s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.52s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-372074 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [78ec4473-d75d-46cb-8754-44c09e856fdc] Pending
helpers_test.go:352: "busybox" [78ec4473-d75d-46cb-8754-44c09e856fdc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [78ec4473-d75d-46cb-8754-44c09e856fdc] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.003865526s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-372074 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.52s)
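
DeployApp is a plain kubectl round-trip: create the pod from the shipped manifest, wait for it to go Ready, then exec a trivial command to prove the runtime wires up stdio. A minimal sketch of the same sequence (the explicit wait is an illustrative stand-in for the harness's pod watcher):

	kubectl --context old-k8s-version-372074 create -f testdata/busybox.yaml
	# Block until the pod matching the test label reports Ready.
	kubectl --context old-k8s-version-372074 wait --for=condition=ready pod -l integration-test=busybox --timeout=8m
	kubectl --context old-k8s-version-372074 exec busybox -- /bin/sh -c "ulimit -n"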

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-372074 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-372074 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.09s)
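
Addon images can be redirected at enable time, which is how the harness points metrics-server at a nonexistent registry without editing manifests. A minimal sketch of the override syntax, followed by the describe call used to confirm the image reference changed:

	# Swap the MetricsServer image and its registry when enabling the addon.
	out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-372074 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
	# The deployment's image field should now reference fake.domain.
	kubectl --context old-k8s-version-372074 describe deploy/metrics-server -n kube-system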

TestStartStop/group/old-k8s-version/serial/Stop (12.18s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-372074 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-372074 --alsologtostderr -v=3: (12.175864276s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.18s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.32s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-372074 -n old-k8s-version-372074
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-372074 -n old-k8s-version-372074: exit status 7 (131.719465ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-372074 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.32s)
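
minikube status reports host state through its exit code as well as stdout, which is why "Stopped" arrives with exit status 7 and the test treats that as acceptable. A minimal sketch of branching on it (the code-to-state mapping here is inferred from these logs):

	out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-372074 -n old-k8s-version-372074
	case $? in
	    0) echo "host running" ;;
	    7) echo "host stopped (expected after minikube stop)" ;;
	    *) echo "unexpected status code" ;;
	esac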

TestStartStop/group/old-k8s-version/serial/SecondStart (108.37s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-372074 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-372074 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (1m48.02634322s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-372074 -n old-k8s-version-372074
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (108.37s)

TestStartStop/group/no-preload/serial/FirstStart (79.19s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-377064 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
E0904 00:08:02.119092  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/addons-250903/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-377064 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (1m19.190395054s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (79.19s)

TestStartStop/group/no-preload/serial/DeployApp (11.38s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-377064 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [5abce6bd-1358-4b4e-ba22-b444c7bc1465] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [5abce6bd-1358-4b4e-ba22-b444c7bc1465] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.003962298s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-377064 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.38s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.16s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-377064 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-377064 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.037786387s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-377064 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.16s)

TestStartStop/group/no-preload/serial/Stop (12.03s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-377064 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-377064 --alsologtostderr -v=3: (12.033226965s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.03s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-cd95d586-j6jnj" [65778599-03c8-4b8b-b3ac-32da42497258] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004454549s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-377064 -n no-preload-377064
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-377064 -n no-preload-377064: exit status 7 (92.638814ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-377064 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/no-preload/serial/SecondStart (58.27s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-377064 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-377064 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (57.759231772s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-377064 -n no-preload-377064
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (58.27s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.17s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-cd95d586-j6jnj" [65778599-03c8-4b8b-b3ac-32da42497258] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003391043s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-372074 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.17s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.43s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-372074 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.43s)
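
The image audit lists everything the container runtime holds and flags tags outside the expected minikube set; kindnetd and the busybox test image are reported but tolerated. A minimal sketch of inspecting the same list with jq (the repoTags field name is an assumption about the JSON schema):

	# List images known to the profile's runtime and print their tags.
	out/minikube-linux-arm64 -p old-k8s-version-372074 image list --format=json | jq -r '.[].repoTags[]?' | sort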

TestStartStop/group/old-k8s-version/serial/Pause (4.45s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-372074 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-372074 --alsologtostderr -v=1: (1.246857832s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-372074 -n old-k8s-version-372074
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-372074 -n old-k8s-version-372074: exit status 2 (466.180666ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-372074 -n old-k8s-version-372074
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-372074 -n old-k8s-version-372074: exit status 2 (470.975857ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-372074 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 unpause -p old-k8s-version-372074 --alsologtostderr -v=1: (1.248763739s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-372074 -n old-k8s-version-372074
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-372074 -n old-k8s-version-372074
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (4.45s)
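
Pause flips the apiserver to Paused while the kubelet reports Stopped, and each component is readable through a Go-template status query; exit status 2 is the expected companion of those states. A minimal sketch of the pause/verify/unpause cycle:

	out/minikube-linux-arm64 pause -p old-k8s-version-372074
	out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-372074 || true   # prints "Paused"
	out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-372074 || true     # prints "Stopped"
	out/minikube-linux-arm64 unpause -p old-k8s-version-372074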

TestStartStop/group/embed-certs/serial/FirstStart (84.21s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-691755 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
E0904 00:09:37.616286  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/functional-062474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-691755 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (1m24.213456969s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (84.21s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-p5lww" [ce7896ee-fc69-4e9e-a007-ad18c17301ce] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002701934s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.1s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-p5lww" [ce7896ee-fc69-4e9e-a007-ad18c17301ce] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004151811s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-377064 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.10s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-377064 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/no-preload/serial/Pause (3.09s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-377064 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-377064 -n no-preload-377064
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-377064 -n no-preload-377064: exit status 2 (332.183908ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-377064 -n no-preload-377064
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-377064 -n no-preload-377064: exit status 2 (334.603173ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-377064 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-377064 -n no-preload-377064
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-377064 -n no-preload-377064
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.09s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (79.5s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-472493 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-472493 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (1m19.497597563s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (79.50s)
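
The default-k8s-diff-port group exercises a non-default apiserver port via --apiserver-port=8444, which ends up in the profile's kubeconfig entry. A minimal sketch of confirming the port after such a start (the jsonpath filter is illustrative):

	# The server URL recorded for the profile's cluster should carry port 8444.
	kubectl config view -o jsonpath='{.clusters[?(@.name=="default-k8s-diff-port-472493")].cluster.server}'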

TestStartStop/group/embed-certs/serial/DeployApp (10.44s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-691755 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [9d962c19-966e-4da9-a7f4-a3a279e82fd6] Pending
helpers_test.go:352: "busybox" [9d962c19-966e-4da9-a7f4-a3a279e82fd6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [9d962c19-966e-4da9-a7f4-a3a279e82fd6] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.003613281s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-691755 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.44s)
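
DeployApp does more than wait for the pod to run: the closing kubectl exec busybox -- /bin/sh -c "ulimit -n" reads the open-file limit inside the container, presumably to confirm the runtime applied a usable ulimit. Reproduced by hand with the same manifest and context (sketch):

    $ kubectl --context embed-certs-691755 create -f testdata/busybox.yaml
    $ kubectl --context embed-certs-691755 wait --for=condition=Ready pod/busybox --timeout=8m0s
    $ kubectl --context embed-certs-691755 exec busybox -- /bin/sh -c "ulimit -n"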

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.27s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-691755 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-691755 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.151686568s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-691755 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.27s)
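
The --images and --registries flags above rewrite the metrics-server addon to use registry.k8s.io/echoserver:1.4 served from the deliberately unresolvable registry fake.domain, so this sub-test exercises the override plumbing rather than a functioning metrics-server; the follow-up kubectl describe deploy/metrics-server is how the harness checks that the substituted reference landed in the Deployment. A direct check of the same field (sketch; the exact image string minikube composes is an assumption here):

    $ kubectl --context embed-certs-691755 -n kube-system get deploy metrics-server \
          -o jsonpath='{.spec.template.spec.containers[0].image}'   # expect a reference rooted at fake.domain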

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (11.97s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-691755 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-691755 --alsologtostderr -v=3: (11.972523034s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.97s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-691755 -n embed-certs-691755
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-691755 -n embed-certs-691755: exit status 7 (85.311245ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-691755 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (55.28s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-691755 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-691755 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (54.935173464s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-691755 -n embed-certs-691755
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (55.28s)
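
The defining flag of this group is --embed-certs: instead of pointing kubeconfig at certificate files under the profile directory, minikube inlines the client certificate and key as base64 data, which is what survives the stop/start cycle being tested here. One way to see the difference (sketch):

    $ kubectl config view --raw --minify --context embed-certs-691755 | grep client-certificate
    # expect "client-certificate-data:" (inline base64) rather than a client-certificate file path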

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.37s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-472493 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [2356c06d-b9b1-44c8-a7ff-d21903cd3773] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [2356c06d-b9b1-44c8-a7ff-d21903cd3773] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003806219s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-472493 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.37s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-472493 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0904 00:11:28.797539  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/old-k8s-version-372074/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 00:11:28.803950  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/old-k8s-version-372074/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 00:11:28.815303  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/old-k8s-version-372074/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 00:11:28.836717  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/old-k8s-version-372074/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 00:11:28.878110  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/old-k8s-version-372074/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 00:11:28.959447  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/old-k8s-version-372074/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 00:11:29.120820  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/old-k8s-version-372074/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-472493 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.02953604s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-472493 describe deploy/metrics-server -n kube-system
E0904 00:11:29.442253  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/old-k8s-version-372074/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.13s)
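
The E0904 cert_rotation lines interleaved above (and recurring through the rest of this log) are noise from the shared test process rather than failures of the running sub-test: its client-go cert-rotation watcher keeps trying to reload client certificates for profiles that earlier tests have already deleted (old-k8s-version-372074 here; no-preload, auto and kindnet profiles later), so the referenced client.crt files no longer exist. Confirming the profile is gone is enough to explain them (sketch):

    $ out/minikube-linux-arm64 profile list   # deleted profiles no longer appear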

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (11.95s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-472493 --alsologtostderr -v=3
E0904 00:11:30.084456  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/old-k8s-version-372074/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 00:11:31.366398  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/old-k8s-version-372074/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 00:11:33.928200  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/old-k8s-version-372074/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 00:11:39.050316  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/old-k8s-version-372074/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-472493 --alsologtostderr -v=3: (11.946507505s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.95s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-472493 -n default-k8s-diff-port-472493
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-472493 -n default-k8s-diff-port-472493: exit status 7 (76.068573ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-472493 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (58.54s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-472493 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-472493 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (58.128096961s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-472493 -n default-k8s-diff-port-472493
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (58.54s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-57t52" [a4819942-aa20-44f3-90ae-c47ad307f43a] Running
E0904 00:11:49.292343  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/old-k8s-version-372074/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004100581s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.15s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-57t52" [a4819942-aa20-44f3-90ae-c47ad307f43a] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006083678s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-691755 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.15s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.33s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-691755 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.33s)
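
VerifyKubernetesImages lists everything in the node's image store and calls out images that are not part of minikube's expected set; kindnetd and the busybox test image are anticipated leftovers from earlier sub-tests, so the test still passes. The same listing can be pulled by hand (command taken from this run; output fields may vary by minikube version):

    $ out/minikube-linux-arm64 -p embed-certs-691755 image list --format=json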

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.95s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-691755 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p embed-certs-691755 --alsologtostderr -v=1: (1.336501491s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-691755 -n embed-certs-691755
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-691755 -n embed-certs-691755: exit status 2 (428.189138ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-691755 -n embed-certs-691755
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-691755 -n embed-certs-691755: exit status 2 (346.704544ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-691755 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-691755 -n embed-certs-691755
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-691755 -n embed-certs-691755
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.95s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (41.32s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-883122 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
E0904 00:12:09.773669  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/old-k8s-version-372074/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-883122 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (41.316871081s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (41.32s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-2xk2q" [2ec05282-71e7-40c8-ae39-1ecd2074226c] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004442905s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.11s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-883122 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-883122 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.105947351s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.11s)
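
The newest-cni group starts with --network-plugin=cni plus --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 and waits only for apiserver, system_pods and default_sa. Because the harness installs nothing on top of that CNI configuration, it treats pod scheduling as unavailable: DeployApp above and UserAppExistsAfterStop/AddonExistsAfterStop below are 0.00s no-ops, each logging the "cni mode requires additional setup" warning seen here. The start invocation, condensed from this run:

    $ out/minikube-linux-arm64 start -p newest-cni-883122 --memory=3072 \
          --wait=apiserver,system_pods,default_sa --network-plugin=cni \
          --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
          --driver=docker --container-runtime=crio --kubernetes-version=v1.34.0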

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-2xk2q" [2ec05282-71e7-40c8-ae39-1ecd2074226c] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004413121s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-472493 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.13s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.24s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-883122 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-883122 --alsologtostderr -v=3: (1.240362116s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.24s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-883122 -n newest-cni-883122
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-883122 -n newest-cni-883122: exit status 7 (72.696175ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-883122 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (20.75s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-883122 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
E0904 00:12:50.735008  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/old-k8s-version-372074/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-883122 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (19.964403082s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-883122 -n newest-cni-883122
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (20.75s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-472493 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (4.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-472493 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-472493 --alsologtostderr -v=1: (1.002035877s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-472493 -n default-k8s-diff-port-472493
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-472493 -n default-k8s-diff-port-472493: exit status 2 (414.061084ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-472493 -n default-k8s-diff-port-472493
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-472493 -n default-k8s-diff-port-472493: exit status 2 (391.535142ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-472493 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-472493 -n default-k8s-diff-port-472493
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-472493 -n default-k8s-diff-port-472493
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.24s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (82.91s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-124788 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E0904 00:13:02.119827  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/addons-250903/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-124788 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m22.908283147s)
--- PASS: TestNetworkPlugins/group/auto/Start (82.91s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.48s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-883122 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.48s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (4.02s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-883122 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p newest-cni-883122 --alsologtostderr -v=1: (1.37712079s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-883122 -n newest-cni-883122
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-883122 -n newest-cni-883122: exit status 2 (403.289768ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-883122 -n newest-cni-883122
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-883122 -n newest-cni-883122: exit status 2 (387.940237ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-883122 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-883122 -n newest-cni-883122
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-883122 -n newest-cni-883122
--- PASS: TestStartStop/group/newest-cni/serial/Pause (4.02s)
E0904 00:18:45.610930  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/no-preload-377064/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 00:19:03.121326  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/default-k8s-diff-port-472493/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 00:19:11.037796  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/kindnet-124788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 00:19:11.044294  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/kindnet-124788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 00:19:11.055825  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/kindnet-124788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 00:19:11.077323  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/kindnet-124788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 00:19:11.118830  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/kindnet-124788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 00:19:11.200341  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/kindnet-124788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 00:19:11.361921  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/kindnet-124788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 00:19:11.683592  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/kindnet-124788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 00:19:12.325309  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/kindnet-124788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 00:19:13.606734  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/kindnet-124788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 00:19:16.168462  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/kindnet-124788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 00:19:20.685961  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/functional-062474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 00:19:21.290666  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/kindnet-124788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 00:19:24.392869  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/auto-124788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 00:19:24.399393  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/auto-124788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 00:19:24.410764  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/auto-124788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 00:19:24.432415  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/auto-124788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 00:19:24.473903  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/auto-124788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 00:19:24.555606  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/auto-124788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 00:19:24.717236  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/auto-124788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 00:19:25.038984  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/auto-124788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 00:19:25.196479  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/addons-250903/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 00:19:25.680824  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/auto-124788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 00:19:26.962241  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/auto-124788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 00:19:29.523889  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/auto-124788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 00:19:31.532695  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/kindnet-124788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 00:19:34.646108  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/auto-124788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 00:19:37.615584  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/functional-062474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 00:19:44.887850  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/auto-124788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (55.47s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-124788 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E0904 00:13:17.909903  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/no-preload-377064/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 00:13:17.916169  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/no-preload-377064/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 00:13:17.927454  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/no-preload-377064/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 00:13:17.948787  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/no-preload-377064/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 00:13:17.990144  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/no-preload-377064/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 00:13:18.071476  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/no-preload-377064/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 00:13:18.232819  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/no-preload-377064/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 00:13:18.554455  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/no-preload-377064/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 00:13:19.196482  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/no-preload-377064/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 00:13:20.479102  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/no-preload-377064/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 00:13:23.040890  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/no-preload-377064/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 00:13:28.162148  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/no-preload-377064/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 00:13:38.403848  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/no-preload-377064/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 00:13:58.885994  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/no-preload-377064/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-124788 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (55.473499505s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (55.47s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-bmzrl" [4d78dad0-bf04-47c0-a7ea-0d6ca1d46e8b] Running
E0904 00:14:12.657165  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/old-k8s-version-372074/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.002961587s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-124788 "pgrep -a kubelet"
I0904 00:14:17.336812  297789 config.go:182] Loaded profile config "kindnet-124788": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.28s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-124788 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-lj6b8" [2d236a2e-4d18-4745-a8d6-4711fb2c5bca] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-lj6b8" [2d236a2e-4d18-4745-a8d6-4711fb2c5bca] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004218289s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.28s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-124788 "pgrep -a kubelet"
I0904 00:14:24.104931  297789 config.go:182] Loaded profile config "auto-124788": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.31s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-124788 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-85vqz" [5dc73dc3-ffec-43b4-916e-91f81acf4f5d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-85vqz" [5dc73dc3-ffec-43b4-916e-91f81acf4f5d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004311344s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.31s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-124788 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-124788 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-124788 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)
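
The DNS, Localhost and HairPin checks in these network-plugin groups all execute inside the netcat deployment created by NetCatPod: nslookup kubernetes.default verifies cluster DNS resolution over the CNI, "nc -z localhost 8080" verifies the pod can reach its own containerPort, and "nc -z netcat 8080" verifies hairpin traffic, i.e. a pod reaching itself back through its own Service name. Replayed against this profile (commands as run by the harness):

    $ kubectl --context kindnet-124788 exec deployment/netcat -- nslookup kubernetes.default
    $ kubectl --context kindnet-124788 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    $ kubectl --context kindnet-124788 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"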

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-124788 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-124788 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.22s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-124788 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.20s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (73.39s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-124788 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-124788 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m13.392645466s)
--- PASS: TestNetworkPlugins/group/calico/Start (73.39s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (54.1s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-124788 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-124788 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (54.101208574s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (54.10s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-124788 "pgrep -a kubelet"
I0904 00:15:54.175500  297789 config.go:182] Loaded profile config "custom-flannel-124788": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.35s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-124788 replace --force -f testdata/netcat-deployment.yaml
I0904 00:15:54.513104  297789 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-mc8nr" [2560e38c-4ed8-4636-a975-659104221818] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-mc8nr" [2560e38c-4ed8-4636-a975-659104221818] Running
E0904 00:16:01.769517  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/no-preload-377064/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.003668007s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.35s)
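
The NetCatPod step deploys testdata/netcat-deployment.yaml and polls for app=netcat pods to become Ready (the Pending/Running transitions logged above). A rough stand-in for that poll with plain kubectl, assuming the custom-flannel-124788 context is available, is:

  $ kubectl --context custom-flannel-124788 replace --force -f testdata/netcat-deployment.yaml
  $ kubectl --context custom-flannel-124788 wait --for=condition=Ready pod -l app=netcat --timeout=15m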

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-rqzph" [40f6f1aa-0578-441c-ad31-521af6014ba3] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-rqzph" [40f6f1aa-0578-441c-ad31-521af6014ba3] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004395868s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-124788 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-124788 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-124788 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)
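
Each CNI group closes with the same three probes, all run inside the netcat deployment: DNS resolves the in-cluster service name, Localhost dials the pod's own loopback, and HairPin dials the pod back through its own Service name (netcat), which only succeeds when the CNI handles hairpin traffic. The exact commands, taken from the runs above:

  $ kubectl --context custom-flannel-124788 exec deployment/netcat -- nslookup kubernetes.default
  $ kubectl --context custom-flannel-124788 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
  $ kubectl --context custom-flannel-124788 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"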

TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-124788 "pgrep -a kubelet"
I0904 00:16:11.972443  297789 config.go:182] Loaded profile config "calico-124788": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

TestNetworkPlugins/group/calico/NetCatPod (13.29s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-124788 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-kqvxx" [c8ca9788-966a-43fa-bcf8-d34cd0019137] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0904 00:16:19.258914  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/default-k8s-diff-port-472493/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-kqvxx" [c8ca9788-966a-43fa-bcf8-d34cd0019137] Running
E0904 00:16:19.266099  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/default-k8s-diff-port-472493/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 00:16:19.277494  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/default-k8s-diff-port-472493/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 00:16:19.300070  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/default-k8s-diff-port-472493/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 00:16:19.341621  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/default-k8s-diff-port-472493/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 00:16:19.423852  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/default-k8s-diff-port-472493/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 00:16:19.586091  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/default-k8s-diff-port-472493/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 00:16:19.907425  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/default-k8s-diff-port-472493/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 00:16:20.549282  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/default-k8s-diff-port-472493/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 00:16:21.831148  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/default-k8s-diff-port-472493/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 00:16:24.393114  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/default-k8s-diff-port-472493/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.003887557s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.29s)
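
The interleaved cert_rotation errors are not part of this test: they come from client-go still trying to reload client certificates for profiles deleted earlier in the run (here default-k8s-diff-port-472493), and the roughly doubling gaps between their timestamps look like ordinary retry backoff. To confirm which profiles still exist at a given point, one could run:

  $ out/minikube-linux-arm64 profile list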

TestNetworkPlugins/group/calico/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-124788 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.27s)

TestNetworkPlugins/group/calico/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-124788 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.23s)

TestNetworkPlugins/group/calico/HairPin (0.25s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-124788 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.25s)

TestNetworkPlugins/group/enable-default-cni/Start (88.64s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-124788 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E0904 00:16:39.756601  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/default-k8s-diff-port-472493/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-124788 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m28.637363602s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (88.64s)

TestNetworkPlugins/group/flannel/Start (66.35s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-124788 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E0904 00:16:56.498705  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/old-k8s-version-372074/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 00:17:00.237932  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/default-k8s-diff-port-472493/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0904 00:17:41.199556  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/default-k8s-diff-port-472493/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-124788 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m6.348707597s)
--- PASS: TestNetworkPlugins/group/flannel/Start (66.35s)
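
As with calico, the flannel rollout can be spot-checked by hand against the same namespace and label the ControllerPod test below waits on, assuming the flannel-124788 context still exists:

  $ kubectl --context flannel-124788 get pods -n kube-flannel -l app=flannel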

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-544k2" [8ec73de8-ed27-4595-8b83-09ab1a5a59b7] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00404473s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.30s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-124788 "pgrep -a kubelet"
I0904 00:17:59.203824  297789 config.go:182] Loaded profile config "enable-default-cni-124788": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.30s)
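
KubeletFlags only verifies that a kubelet process is up and captures its full command line; pgrep's -a flag is what prints the arguments, so the flags can be inspected by hand with the same invocation:

  $ out/minikube-linux-arm64 ssh -p enable-default-cni-124788 "pgrep -a kubelet"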

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-124788 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-j5rxv" [943676ce-48d2-4942-8772-5cd07b0ad0f3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0904 00:18:02.119320  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/addons-250903/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-j5rxv" [943676ce-48d2-4942-8772-5cd07b0ad0f3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.003377253s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.29s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-124788 "pgrep -a kubelet"
I0904 00:18:05.089281  297789 config.go:182] Loaded profile config "flannel-124788": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/flannel/NetCatPod (10.28s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-124788 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-mcklr" [4861cacf-a676-4782-a894-8875b71b7feb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-mcklr" [4861cacf-a676-4782-a894-8875b71b7feb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.002830173s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.28s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-124788 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-124788 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-124788 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

TestNetworkPlugins/group/flannel/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-124788 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.27s)

TestNetworkPlugins/group/flannel/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-124788 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.19s)

TestNetworkPlugins/group/flannel/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-124788 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.23s)

TestNetworkPlugins/group/bridge/Start (71.93s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-124788 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-124788 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m11.934420025s)
--- PASS: TestNetworkPlugins/group/bridge/Start (71.93s)
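
The Start tests in this section exercise several ways minikube selects a CNI: built-in plugin names (--cni=calico, --cni=flannel, --cni=bridge), a user-supplied manifest (--cni=testdata/kube-flannel.yaml), and the runtime's default via --enable-default-cni=true. Only the CNI flag changes between invocations; the bridge run above is otherwise representative:

  $ out/minikube-linux-arm64 start -p bridge-124788 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker --container-runtime=crio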

TestNetworkPlugins/group/bridge/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-124788 "pgrep -a kubelet"
I0904 00:19:45.566921  297789 config.go:182] Loaded profile config "bridge-124788": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.35s)

TestNetworkPlugins/group/bridge/NetCatPod (9.28s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-124788 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-8sb2s" [60be22d9-97c1-4d43-9321-95bdc3dd0e16] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-8sb2s" [60be22d9-97c1-4d43-9321-95bdc3dd0e16] Running
E0904 00:19:52.014494  297789 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21341-295927/.minikube/profiles/kindnet-124788/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004357086s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.28s)

TestNetworkPlugins/group/bridge/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-124788 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

TestNetworkPlugins/group/bridge/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-124788 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

TestNetworkPlugins/group/bridge/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-124788 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

Test skip (32/332)

TestDownloadOnly/v1.20.0/cached-images (0.00s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0.00s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0.00s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.34.0/cached-images (0.00s)

=== RUN   TestDownloadOnly/v1.34.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.0/cached-images (0.00s)

TestDownloadOnly/v1.34.0/binaries (0.00s)

=== RUN   TestDownloadOnly/v1.34.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.0/binaries (0.00s)

TestDownloadOnly/v1.34.0/kubectl (0.00s)

=== RUN   TestDownloadOnly/v1.34.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.0/kubectl (0.00s)

TestDownloadOnlyKic (0.62s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-456061 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-456061" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-456061
--- SKIP: TestDownloadOnlyKic (0.62s)

TestOffline (0.00s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/Volcano (0.33s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-250903 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.33s)

TestAddons/serial/GCPAuth/RealCredentials (0.00s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0.00s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0.00s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0.00s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0.00s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0.00s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0.00s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0.00s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0.00s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0.00s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0.00s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0.00s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0.00s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0.00s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0.00s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0.00s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-504127" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-504127
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

TestNetworkPlugins/group/kubenet (4.73s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires a CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-124788 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-124788

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-124788

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-124788

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-124788

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-124788

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-124788

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-124788

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-124788

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-124788

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-124788

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-124788"

>>> host: /etc/hosts:
* Profile "kubenet-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-124788"

>>> host: /etc/resolv.conf:
* Profile "kubenet-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-124788"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-124788

>>> host: crictl pods:
* Profile "kubenet-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-124788"

>>> host: crictl containers:
* Profile "kubenet-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-124788"

>>> k8s: describe netcat deployment:
error: context "kubenet-124788" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-124788" does not exist

>>> k8s: netcat logs:
error: context "kubenet-124788" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-124788" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-124788" does not exist

>>> k8s: coredns logs:
error: context "kubenet-124788" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-124788" does not exist

>>> k8s: api server logs:
error: context "kubenet-124788" does not exist

>>> host: /etc/cni:
* Profile "kubenet-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-124788"

>>> host: ip a s:
* Profile "kubenet-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-124788"

>>> host: ip r s:
* Profile "kubenet-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-124788"

>>> host: iptables-save:
* Profile "kubenet-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-124788"

>>> host: iptables table nat:
* Profile "kubenet-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-124788"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-124788" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-124788" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-124788" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-124788"

>>> host: kubelet daemon config:
* Profile "kubenet-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-124788"

>>> k8s: kubelet logs:
* Profile "kubenet-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-124788"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-124788"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-124788"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-124788

>>> host: docker daemon status:
* Profile "kubenet-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-124788"

>>> host: docker daemon config:
* Profile "kubenet-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-124788"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-124788"

>>> host: docker system info:
* Profile "kubenet-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-124788"

>>> host: cri-docker daemon status:
* Profile "kubenet-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-124788"

>>> host: cri-docker daemon config:
* Profile "kubenet-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-124788"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-124788"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-124788"

>>> host: cri-dockerd version:
* Profile "kubenet-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-124788"

>>> host: containerd daemon status:
* Profile "kubenet-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-124788"

>>> host: containerd daemon config:
* Profile "kubenet-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-124788"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-124788"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-124788"

>>> host: containerd config dump:
* Profile "kubenet-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-124788"

>>> host: crio daemon status:
* Profile "kubenet-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-124788"

>>> host: crio daemon config:
* Profile "kubenet-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-124788"

>>> host: /etc/crio:
* Profile "kubenet-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-124788"

>>> host: crio config:
* Profile "kubenet-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-124788"

----------------------- debugLogs end: kubenet-124788 [took: 4.548443166s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-124788" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-124788
--- SKIP: TestNetworkPlugins/group/kubenet (4.73s)
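Note: the uniform "Profile \"kubenet-124788\" not found" responses above are expected rather than a collection bug. The kubenet group is skipped before any cluster is created, but the debugLogs collector still runs every host/k8s probe against the never-created profile, so each probe returns minikube's profile-not-found hint. A minimal way to reproduce one of these probe failures by hand, assuming only a built out/minikube-linux-arm64 binary (the profile name is arbitrary and intentionally nonexistent):

  # list known profiles; kubenet-124788 will not appear
  out/minikube-linux-arm64 profile list

  # any per-profile command against the missing profile fails the same way
  out/minikube-linux-arm64 -p kubenet-124788 ssh "cat /etc/crio/crio.conf"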

x
+
TestNetworkPlugins/group/cilium (5.03s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-124788 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-124788

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-124788

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-124788

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-124788

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-124788

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-124788

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-124788

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-124788

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-124788

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-124788

>>> host: /etc/nsswitch.conf:
* Profile "cilium-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-124788"

>>> host: /etc/hosts:
* Profile "cilium-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-124788"

>>> host: /etc/resolv.conf:
* Profile "cilium-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-124788"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-124788

>>> host: crictl pods:
* Profile "cilium-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-124788"

>>> host: crictl containers:
* Profile "cilium-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-124788"

>>> k8s: describe netcat deployment:
error: context "cilium-124788" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-124788" does not exist

>>> k8s: netcat logs:
error: context "cilium-124788" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-124788" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-124788" does not exist

>>> k8s: coredns logs:
error: context "cilium-124788" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-124788" does not exist

>>> k8s: api server logs:
error: context "cilium-124788" does not exist

>>> host: /etc/cni:
* Profile "cilium-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-124788"

>>> host: ip a s:
* Profile "cilium-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-124788"

>>> host: ip r s:
* Profile "cilium-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-124788"

>>> host: iptables-save:
* Profile "cilium-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-124788"

>>> host: iptables table nat:
* Profile "cilium-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-124788"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-124788

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-124788

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-124788" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-124788" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-124788

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-124788

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-124788" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-124788" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-124788" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-124788" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-124788" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-124788"

>>> host: kubelet daemon config:
* Profile "cilium-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-124788"

>>> k8s: kubelet logs:
* Profile "cilium-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-124788"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-124788"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-124788"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-124788

>>> host: docker daemon status:
* Profile "cilium-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-124788"

>>> host: docker daemon config:
* Profile "cilium-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-124788"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-124788"

>>> host: docker system info:
* Profile "cilium-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-124788"

>>> host: cri-docker daemon status:
* Profile "cilium-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-124788"

>>> host: cri-docker daemon config:
* Profile "cilium-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-124788"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-124788"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-124788"

>>> host: cri-dockerd version:
* Profile "cilium-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-124788"

>>> host: containerd daemon status:
* Profile "cilium-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-124788"

>>> host: containerd daemon config:
* Profile "cilium-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-124788"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-124788"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-124788"

>>> host: containerd config dump:
* Profile "cilium-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-124788"

>>> host: crio daemon status:
* Profile "cilium-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-124788"

>>> host: crio daemon config:
* Profile "cilium-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-124788"

>>> host: /etc/crio:
* Profile "cilium-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-124788"

>>> host: crio config:
* Profile "cilium-124788" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-124788"

----------------------- debugLogs end: cilium-124788 [took: 4.85184012s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-124788" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-124788
--- SKIP: TestNetworkPlugins/group/cilium (5.03s)
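Note: this skip is hard-coded at net_test.go:102, so the "context was not found" / "Profile ... not found" noise above is simply the debugLogs collector probing a cluster that was deliberately never started. A minimal sketch for exercising the cilium path by hand instead, assuming a minikube checkout with a built out/minikube-linux-arm64 binary (the profile name is arbitrary; verify flag values against "minikube start --help" before relying on them):

  # bring up a throwaway cluster with the cilium CNI
  out/minikube-linux-arm64 start -p cilium-124788 --cni=cilium

  # the context now exists, so the probes above would return real data
  kubectl --context cilium-124788 get nodes
  kubectl --context cilium-124788 -n kube-system get daemonset

  # clean up, mirroring what helpers_test.go does
  out/minikube-linux-arm64 delete -p cilium-124788

To drive the same scenario through the integration harness, the usual shape is "go test ./test/integration -run TestNetworkPlugins/group/cilium" after removing the skip, though the exact harness flags vary by environment and are an assumption here.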
