Test Report: Docker_Linux_crio_arm64 21508

8932374f20a738e68cf28dc9e127463468f1eb30:2025-09-08:41334

Failed tests (6/332)

Order  Failed test                                    Duration (s)
37     TestAddons/parallel/Ingress                    153.31
98     TestFunctional/parallel/ServiceCmdConnect      604.03
147    TestFunctional/parallel/ServiceCmd/DeployApp   600.88
153    TestFunctional/parallel/ServiceCmd/HTTPS       0.57
154    TestFunctional/parallel/ServiceCmd/Format      0.55
155    TestFunctional/parallel/ServiceCmd/URL         0.54
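
Note: the three sub-second ServiceCmd failures (HTTPS, Format, URL) ran immediately after TestFunctional/parallel/ServiceCmd/DeployApp timed out at ~600s, so they look like cascade failures against a service that never became ready rather than three independent regressions. A minimal sketch for re-running only the failing tests against a prebuilt binary; the flag names and relative paths are assumptions based on minikube's integration-test harness, not taken from this report:

	# hypothetical local re-run of just the failing tests
	go test ./test/integration -v -timeout 90m \
	  -run 'TestAddons/parallel/Ingress|TestFunctional/parallel/ServiceCmd' \
	  -args --binary=../../out/minikube-linux-arm64 \
	  --minikube-start-args='--driver=docker --container-runtime=crio'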
TestAddons/parallel/Ingress (153.31s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-090979 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-090979 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-090979 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [d67dab80-b279-4809-8fda-170d80fb75e6] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [d67dab80-b279-4809-8fda-170d80fb75e6] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003829964s
I0908 12:38:16.106628  560849 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-090979 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-090979 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.047303058s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-090979 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-090979 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
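
Note: "ssh: Process exited with status 28" in the stderr block above is curl's exit code 28 ("operation timed out") propagating through minikube ssh; the in-node request to the ingress on 127.0.0.1:80 never completed. A rough sketch for reproducing the probe by hand against a live addons-090979 profile, using the same commands the test runs (the explicit --max-time bound is an added assumption, not part of the test):

	# wait for the ingress-nginx controller, then probe it from inside the node
	kubectl --context addons-090979 -n ingress-nginx wait pod \
	  --selector=app.kubernetes.io/component=controller \
	  --for=condition=ready --timeout=90s
	out/minikube-linux-arm64 -p addons-090979 ssh \
	  "curl -s --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"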
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-090979
helpers_test.go:243: (dbg) docker inspect addons-090979:

-- stdout --
	[
	    {
	        "Id": "c0b2b1561bc05f50862f30fcec130c66e1f3e485c8f875e8b8cb86d54c1b8f12",
	        "Created": "2025-09-08T12:33:59.123220152Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 561993,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-08T12:33:59.184353931Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1a6e5b410fd9226cf2434621073598c7c01bccc994a53260ab0a0d834a0f1815",
	        "ResolvConfPath": "/var/lib/docker/containers/c0b2b1561bc05f50862f30fcec130c66e1f3e485c8f875e8b8cb86d54c1b8f12/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c0b2b1561bc05f50862f30fcec130c66e1f3e485c8f875e8b8cb86d54c1b8f12/hostname",
	        "HostsPath": "/var/lib/docker/containers/c0b2b1561bc05f50862f30fcec130c66e1f3e485c8f875e8b8cb86d54c1b8f12/hosts",
	        "LogPath": "/var/lib/docker/containers/c0b2b1561bc05f50862f30fcec130c66e1f3e485c8f875e8b8cb86d54c1b8f12/c0b2b1561bc05f50862f30fcec130c66e1f3e485c8f875e8b8cb86d54c1b8f12-json.log",
	        "Name": "/addons-090979",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-090979:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-090979",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c0b2b1561bc05f50862f30fcec130c66e1f3e485c8f875e8b8cb86d54c1b8f12",
	                "LowerDir": "/var/lib/docker/overlay2/bce68386df4a765b06b02e8b1472de6731c9a009094b9dd18f47a02dfeb6953c-init/diff:/var/lib/docker/overlay2/194ba2667b0da80d09d69a06dabfcbc80057d4e7ee5de99b71c65d9470b74398/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bce68386df4a765b06b02e8b1472de6731c9a009094b9dd18f47a02dfeb6953c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bce68386df4a765b06b02e8b1472de6731c9a009094b9dd18f47a02dfeb6953c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bce68386df4a765b06b02e8b1472de6731c9a009094b9dd18f47a02dfeb6953c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-090979",
	                "Source": "/var/lib/docker/volumes/addons-090979/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-090979",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-090979",
	                "name.minikube.sigs.k8s.io": "addons-090979",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "02a252cd037f5c858464da428e2c7084c7513e300fda2bd847becc57243d0cd1",
	            "SandboxKey": "/var/run/docker/netns/02a252cd037f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33504"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33505"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33508"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33506"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33507"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-090979": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "16:51:3c:61:42:28",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8fd01d345d505f3ccbe516cab6ff3a611e7765b89cdda30115bcff70fbb6c978",
	                    "EndpointID": "3f8360c86457d7074d5e1bb7a3263967a376dd237d9bc9ff96b1f29785f00199",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-090979",
	                        "c0b2b1561bc0"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
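
Note: the NetworkSettings.Ports map in the inspect output above is where minikube resolves the dynamically published host ports of the node container (e.g. 22/tcp -> 127.0.0.1:33504 for SSH). The "Last Start" log below performs this lookup with a Go template; the same query can be run by hand, assuming the container name from this report:

	# extract the host port mapped to the node's SSH port (22/tcp)
	docker container inspect addons-090979 \
	  --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'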
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-090979 -n addons-090979
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-090979 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-090979 logs -n 25: (1.742240545s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-docker-574217                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-574217 │ jenkins │ v1.36.0 │ 08 Sep 25 12:33 UTC │ 08 Sep 25 12:33 UTC │
	│ start   │ --download-only -p binary-mirror-724699 --alsologtostderr --binary-mirror http://127.0.0.1:41151 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-724699   │ jenkins │ v1.36.0 │ 08 Sep 25 12:33 UTC │                     │
	│ delete  │ -p binary-mirror-724699                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-724699   │ jenkins │ v1.36.0 │ 08 Sep 25 12:33 UTC │ 08 Sep 25 12:33 UTC │
	│ addons  │ enable dashboard -p addons-090979                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-090979          │ jenkins │ v1.36.0 │ 08 Sep 25 12:33 UTC │                     │
	│ addons  │ disable dashboard -p addons-090979                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-090979          │ jenkins │ v1.36.0 │ 08 Sep 25 12:33 UTC │                     │
	│ start   │ -p addons-090979 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-090979          │ jenkins │ v1.36.0 │ 08 Sep 25 12:33 UTC │ 08 Sep 25 12:36 UTC │
	│ addons  │ addons-090979 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-090979          │ jenkins │ v1.36.0 │ 08 Sep 25 12:36 UTC │ 08 Sep 25 12:36 UTC │
	│ addons  │ addons-090979 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-090979          │ jenkins │ v1.36.0 │ 08 Sep 25 12:36 UTC │ 08 Sep 25 12:36 UTC │
	│ addons  │ enable headlamp -p addons-090979 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-090979          │ jenkins │ v1.36.0 │ 08 Sep 25 12:36 UTC │ 08 Sep 25 12:36 UTC │
	│ addons  │ addons-090979 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-090979          │ jenkins │ v1.36.0 │ 08 Sep 25 12:37 UTC │ 08 Sep 25 12:37 UTC │
	│ ip      │ addons-090979 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-090979          │ jenkins │ v1.36.0 │ 08 Sep 25 12:37 UTC │ 08 Sep 25 12:37 UTC │
	│ addons  │ addons-090979 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-090979          │ jenkins │ v1.36.0 │ 08 Sep 25 12:37 UTC │ 08 Sep 25 12:37 UTC │
	│ addons  │ addons-090979 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-090979          │ jenkins │ v1.36.0 │ 08 Sep 25 12:37 UTC │ 08 Sep 25 12:37 UTC │
	│ addons  │ addons-090979 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-090979          │ jenkins │ v1.36.0 │ 08 Sep 25 12:37 UTC │ 08 Sep 25 12:37 UTC │
	│ ssh     │ addons-090979 ssh cat /opt/local-path-provisioner/pvc-9785f1bd-055d-44c5-a947-2a5a3a5a8e1c_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-090979          │ jenkins │ v1.36.0 │ 08 Sep 25 12:37 UTC │ 08 Sep 25 12:37 UTC │
	│ addons  │ addons-090979 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-090979          │ jenkins │ v1.36.0 │ 08 Sep 25 12:37 UTC │ 08 Sep 25 12:38 UTC │
	│ addons  │ addons-090979 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-090979          │ jenkins │ v1.36.0 │ 08 Sep 25 12:37 UTC │ 08 Sep 25 12:37 UTC │
	│ addons  │ addons-090979 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-090979          │ jenkins │ v1.36.0 │ 08 Sep 25 12:37 UTC │ 08 Sep 25 12:37 UTC │
	│ addons  │ addons-090979 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-090979          │ jenkins │ v1.36.0 │ 08 Sep 25 12:38 UTC │ 08 Sep 25 12:38 UTC │
	│ addons  │ addons-090979 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-090979          │ jenkins │ v1.36.0 │ 08 Sep 25 12:38 UTC │ 08 Sep 25 12:38 UTC │
	│ ssh     │ addons-090979 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-090979          │ jenkins │ v1.36.0 │ 08 Sep 25 12:38 UTC │                     │
	│ addons  │ addons-090979 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-090979          │ jenkins │ v1.36.0 │ 08 Sep 25 12:38 UTC │ 08 Sep 25 12:38 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-090979                                                                                                                                                                                                                                                                                                                                                                                           │ addons-090979          │ jenkins │ v1.36.0 │ 08 Sep 25 12:38 UTC │ 08 Sep 25 12:38 UTC │
	│ addons  │ addons-090979 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-090979          │ jenkins │ v1.36.0 │ 08 Sep 25 12:38 UTC │ 08 Sep 25 12:38 UTC │
	│ ip      │ addons-090979 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-090979          │ jenkins │ v1.36.0 │ 08 Sep 25 12:40 UTC │ 08 Sep 25 12:40 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 12:33:34
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 12:33:34.066514  561600 out.go:360] Setting OutFile to fd 1 ...
	I0908 12:33:34.066731  561600 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:33:34.066763  561600 out.go:374] Setting ErrFile to fd 2...
	I0908 12:33:34.066785  561600 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:33:34.067125  561600 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-558996/.minikube/bin
	I0908 12:33:34.067647  561600 out.go:368] Setting JSON to false
	I0908 12:33:34.068553  561600 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":8166,"bootTime":1757326648,"procs":150,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I0908 12:33:34.068653  561600 start.go:140] virtualization:  
	I0908 12:33:34.070205  561600 out.go:179] * [addons-090979] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	I0908 12:33:34.071704  561600 out.go:179]   - MINIKUBE_LOCATION=21508
	I0908 12:33:34.071883  561600 notify.go:220] Checking for updates...
	I0908 12:33:34.074529  561600 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 12:33:34.076161  561600 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21508-558996/kubeconfig
	I0908 12:33:34.077344  561600 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-558996/.minikube
	I0908 12:33:34.078688  561600 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0908 12:33:34.079868  561600 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 12:33:34.081345  561600 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 12:33:34.101450  561600 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 12:33:34.101574  561600 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 12:33:34.166722  561600 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-09-08 12:33:34.157749364 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 12:33:34.166838  561600 docker.go:318] overlay module found
	I0908 12:33:34.168472  561600 out.go:179] * Using the docker driver based on user configuration
	I0908 12:33:34.169585  561600 start.go:304] selected driver: docker
	I0908 12:33:34.169613  561600 start.go:918] validating driver "docker" against <nil>
	I0908 12:33:34.169630  561600 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 12:33:34.170371  561600 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 12:33:34.226680  561600 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-09-08 12:33:34.217695204 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 12:33:34.226833  561600 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0908 12:33:34.227100  561600 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 12:33:34.228307  561600 out.go:179] * Using Docker driver with root privileges
	I0908 12:33:34.229404  561600 cni.go:84] Creating CNI manager for ""
	I0908 12:33:34.229468  561600 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0908 12:33:34.229477  561600 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0908 12:33:34.229559  561600 start.go:348] cluster config:
	{Name:addons-090979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-090979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 12:33:34.231100  561600 out.go:179] * Starting "addons-090979" primary control-plane node in "addons-090979" cluster
	I0908 12:33:34.232244  561600 cache.go:123] Beginning downloading kic base image for docker with crio
	I0908 12:33:34.233529  561600 out.go:179] * Pulling base image v0.0.47-1756980985-21488 ...
	I0908 12:33:34.234557  561600 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 12:33:34.234611  561600 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21508-558996/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4
	I0908 12:33:34.234623  561600 cache.go:58] Caching tarball of preloaded images
	I0908 12:33:34.234634  561600 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon
	I0908 12:33:34.234706  561600 preload.go:172] Found /home/jenkins/minikube-integration/21508-558996/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0908 12:33:34.234716  561600 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0908 12:33:34.235070  561600 profile.go:143] Saving config to /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/addons-090979/config.json ...
	I0908 12:33:34.235103  561600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/addons-090979/config.json: {Name:mk3fadd61e0ca2589823e997943f6d5d0159a38b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:33:34.251157  561600 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 to local cache
	I0908 12:33:34.251286  561600 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local cache directory
	I0908 12:33:34.251326  561600 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local cache directory, skipping pull
	I0908 12:33:34.251337  561600 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 exists in cache, skipping pull
	I0908 12:33:34.251345  561600 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 as a tarball
	I0908 12:33:34.251356  561600 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 from local cache
	I0908 12:33:52.191519  561600 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 from cached tarball
	I0908 12:33:52.191558  561600 cache.go:232] Successfully downloaded all kic artifacts
	I0908 12:33:52.191598  561600 start.go:360] acquireMachinesLock for addons-090979: {Name:mkc27947af5da833d8c6523f258142cbca94abdf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 12:33:52.191731  561600 start.go:364] duration metric: took 109.719µs to acquireMachinesLock for "addons-090979"
	I0908 12:33:52.191763  561600 start.go:93] Provisioning new machine with config: &{Name:addons-090979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-090979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0908 12:33:52.191835  561600 start.go:125] createHost starting for "" (driver="docker")
	I0908 12:33:52.195488  561600 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0908 12:33:52.195786  561600 start.go:159] libmachine.API.Create for "addons-090979" (driver="docker")
	I0908 12:33:52.195845  561600 client.go:168] LocalClient.Create starting
	I0908 12:33:52.195980  561600 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21508-558996/.minikube/certs/ca.pem
	I0908 12:33:53.021299  561600 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21508-558996/.minikube/certs/cert.pem
	I0908 12:33:53.321124  561600 cli_runner.go:164] Run: docker network inspect addons-090979 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0908 12:33:53.337662  561600 cli_runner.go:211] docker network inspect addons-090979 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0908 12:33:53.337754  561600 network_create.go:284] running [docker network inspect addons-090979] to gather additional debugging logs...
	I0908 12:33:53.337814  561600 cli_runner.go:164] Run: docker network inspect addons-090979
	W0908 12:33:53.352952  561600 cli_runner.go:211] docker network inspect addons-090979 returned with exit code 1
	I0908 12:33:53.352984  561600 network_create.go:287] error running [docker network inspect addons-090979]: docker network inspect addons-090979: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-090979 not found
	I0908 12:33:53.352997  561600 network_create.go:289] output of [docker network inspect addons-090979]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-090979 not found
	
	** /stderr **
	I0908 12:33:53.353087  561600 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0908 12:33:53.368976  561600 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a34930}
	I0908 12:33:53.369020  561600 network_create.go:124] attempt to create docker network addons-090979 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0908 12:33:53.369077  561600 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-090979 addons-090979
	I0908 12:33:53.430641  561600 network_create.go:108] docker network addons-090979 192.168.49.0/24 created
	I0908 12:33:53.430675  561600 kic.go:121] calculated static IP "192.168.49.2" for the "addons-090979" container
	I0908 12:33:53.430761  561600 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0908 12:33:53.446521  561600 cli_runner.go:164] Run: docker volume create addons-090979 --label name.minikube.sigs.k8s.io=addons-090979 --label created_by.minikube.sigs.k8s.io=true
	I0908 12:33:53.464201  561600 oci.go:103] Successfully created a docker volume addons-090979
	I0908 12:33:53.464301  561600 cli_runner.go:164] Run: docker run --rm --name addons-090979-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-090979 --entrypoint /usr/bin/test -v addons-090979:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 -d /var/lib
	I0908 12:33:54.824843  561600 cli_runner.go:217] Completed: docker run --rm --name addons-090979-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-090979 --entrypoint /usr/bin/test -v addons-090979:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 -d /var/lib: (1.360500368s)
	I0908 12:33:54.824882  561600 oci.go:107] Successfully prepared a docker volume addons-090979
	I0908 12:33:54.824915  561600 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 12:33:54.824934  561600 kic.go:194] Starting extracting preloaded images to volume ...
	I0908 12:33:54.825001  561600 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21508-558996/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-090979:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 -I lz4 -xf /preloaded.tar -C /extractDir
	I0908 12:33:59.044413  561600 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21508-558996/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-090979:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 -I lz4 -xf /preloaded.tar -C /extractDir: (4.219376223s)
	I0908 12:33:59.044448  561600 kic.go:203] duration metric: took 4.219509163s to extract preloaded images to volume ...
	W0908 12:33:59.044606  561600 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0908 12:33:59.044745  561600 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0908 12:33:59.108114  561600 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-090979 --name addons-090979 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-090979 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-090979 --network addons-090979 --ip 192.168.49.2 --volume addons-090979:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79
	I0908 12:33:59.394711  561600 cli_runner.go:164] Run: docker container inspect addons-090979 --format={{.State.Running}}
	I0908 12:33:59.418852  561600 cli_runner.go:164] Run: docker container inspect addons-090979 --format={{.State.Status}}
	I0908 12:33:59.454169  561600 cli_runner.go:164] Run: docker exec addons-090979 stat /var/lib/dpkg/alternatives/iptables
	I0908 12:33:59.514334  561600 oci.go:144] the created container "addons-090979" has a running status.
	I0908 12:33:59.514364  561600 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21508-558996/.minikube/machines/addons-090979/id_rsa...
	I0908 12:34:00.917332  561600 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21508-558996/.minikube/machines/addons-090979/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0908 12:34:00.936699  561600 cli_runner.go:164] Run: docker container inspect addons-090979 --format={{.State.Status}}
	I0908 12:34:00.952584  561600 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0908 12:34:00.952608  561600 kic_runner.go:114] Args: [docker exec --privileged addons-090979 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0908 12:34:00.994427  561600 cli_runner.go:164] Run: docker container inspect addons-090979 --format={{.State.Status}}
	I0908 12:34:01.012608  561600 machine.go:93] provisionDockerMachine start ...
	I0908 12:34:01.012711  561600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-090979
	I0908 12:34:01.029587  561600 main.go:141] libmachine: Using SSH client type: native
	I0908 12:34:01.029943  561600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 33504 <nil> <nil>}
	I0908 12:34:01.029959  561600 main.go:141] libmachine: About to run SSH command:
	hostname
	I0908 12:34:01.157572  561600 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-090979
	
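The inspect template above ('{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}') is how minikube discovers which host port Docker mapped to the container's sshd; here it resolves to 127.0.0.1:33504, which the native SSH client then dials. A minimal Go sketch of the same text/template lookup, run against a simplified stand-in for the inspect JSON (the nested-map shape below is an assumption for illustration, not Docker's real types):

    package main

    import (
    	"os"
    	"text/template"
    )

    func main() {
    	// Simplified stand-in for docker inspect output: the real document
    	// nests port bindings under NetworkSettings.Ports["22/tcp"].
    	data := map[string]any{
    		"NetworkSettings": map[string]any{
    			"Ports": map[string][]map[string]string{
    				"22/tcp": {{"HostIP": "127.0.0.1", "HostPort": "33504"}},
    			},
    		},
    	}
    	tmpl := template.Must(template.New("port").Parse(
    		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`))
    	// Executing the template prints "33504".
    	if err := tmpl.Execute(os.Stdout, data); err != nil {
    		panic(err)
    	}
    }

Driving docker via -f templates instead of parsing full JSON keeps the caller dependent only on the one field it needs.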
	I0908 12:34:01.157602  561600 ubuntu.go:182] provisioning hostname "addons-090979"
	I0908 12:34:01.157689  561600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-090979
	I0908 12:34:01.177801  561600 main.go:141] libmachine: Using SSH client type: native
	I0908 12:34:01.178116  561600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 33504 <nil> <nil>}
	I0908 12:34:01.178137  561600 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-090979 && echo "addons-090979" | sudo tee /etc/hostname
	I0908 12:34:01.314808  561600 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-090979
	
	I0908 12:34:01.314901  561600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-090979
	I0908 12:34:01.336140  561600 main.go:141] libmachine: Using SSH client type: native
	I0908 12:34:01.336459  561600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 33504 <nil> <nil>}
	I0908 12:34:01.336476  561600 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-090979' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-090979/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-090979' | sudo tee -a /etc/hosts; 
				fi
			fi
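The SSH command above pins the machine name in /etc/hosts, following the Debian/Ubuntu convention of mapping the hostname to 127.0.1.1 so name resolution works without DNS. A minimal Go sketch of the same edit (illustration only, not minikube's code; it prints the rewritten file instead of writing it back):

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	f, err := os.Open("/etc/hosts")
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()
    	replaced := false
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		line := sc.Text()
    		// Mirror the sed branch above: rewrite an existing 127.0.1.1 entry.
    		if strings.HasPrefix(line, "127.0.1.1") {
    			line = "127.0.1.1 addons-090979"
    			replaced = true
    		}
    		fmt.Println(line)
    	}
    	// Mirror the else branch: append an entry if none was present.
    	if !replaced {
    		fmt.Println("127.0.1.1 addons-090979")
    	}
    }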
	I0908 12:34:01.462098  561600 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0908 12:34:01.462129  561600 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21508-558996/.minikube CaCertPath:/home/jenkins/minikube-integration/21508-558996/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21508-558996/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21508-558996/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21508-558996/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21508-558996/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21508-558996/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21508-558996/.minikube}
	I0908 12:34:01.462153  561600 ubuntu.go:190] setting up certificates
	I0908 12:34:01.462163  561600 provision.go:84] configureAuth start
	I0908 12:34:01.462225  561600 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-090979
	I0908 12:34:01.480004  561600 provision.go:143] copyHostCerts
	I0908 12:34:01.480096  561600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21508-558996/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21508-558996/.minikube/ca.pem (1082 bytes)
	I0908 12:34:01.480226  561600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21508-558996/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21508-558996/.minikube/cert.pem (1123 bytes)
	I0908 12:34:01.480297  561600 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21508-558996/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21508-558996/.minikube/key.pem (1675 bytes)
	I0908 12:34:01.480358  561600 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21508-558996/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21508-558996/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21508-558996/.minikube/certs/ca-key.pem org=jenkins.addons-090979 san=[127.0.0.1 192.168.49.2 addons-090979 localhost minikube]
	I0908 12:34:01.789499  561600 provision.go:177] copyRemoteCerts
	I0908 12:34:01.789571  561600 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0908 12:34:01.789613  561600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-090979
	I0908 12:34:01.809991  561600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/21508-558996/.minikube/machines/addons-090979/id_rsa Username:docker}
	I0908 12:34:01.898836  561600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-558996/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0908 12:34:01.923784  561600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-558996/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0908 12:34:01.947704  561600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-558996/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0908 12:34:01.972485  561600 provision.go:87] duration metric: took 510.294397ms to configureAuth
	I0908 12:34:01.972516  561600 ubuntu.go:206] setting minikube options for container-runtime
	I0908 12:34:01.972705  561600 config.go:182] Loaded profile config "addons-090979": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 12:34:01.972827  561600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-090979
	I0908 12:34:01.996873  561600 main.go:141] libmachine: Using SSH client type: native
	I0908 12:34:01.997196  561600 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 33504 <nil> <nil>}
	I0908 12:34:01.997219  561600 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0908 12:34:02.228211  561600 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0908 12:34:02.228233  561600 machine.go:96] duration metric: took 1.215599455s to provisionDockerMachine
	I0908 12:34:02.228243  561600 client.go:171] duration metric: took 10.032380972s to LocalClient.Create
	I0908 12:34:02.228266  561600 start.go:167] duration metric: took 10.032481109s to libmachine.API.Create "addons-090979"
	I0908 12:34:02.228275  561600 start.go:293] postStartSetup for "addons-090979" (driver="docker")
	I0908 12:34:02.228285  561600 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0908 12:34:02.228355  561600 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0908 12:34:02.228400  561600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-090979
	I0908 12:34:02.246825  561600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/21508-558996/.minikube/machines/addons-090979/id_rsa Username:docker}
	I0908 12:34:02.343639  561600 ssh_runner.go:195] Run: cat /etc/os-release
	I0908 12:34:02.346985  561600 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0908 12:34:02.347022  561600 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0908 12:34:02.347033  561600 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0908 12:34:02.347041  561600 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0908 12:34:02.347052  561600 filesync.go:126] Scanning /home/jenkins/minikube-integration/21508-558996/.minikube/addons for local assets ...
	I0908 12:34:02.347131  561600 filesync.go:126] Scanning /home/jenkins/minikube-integration/21508-558996/.minikube/files for local assets ...
	I0908 12:34:02.347160  561600 start.go:296] duration metric: took 118.879673ms for postStartSetup
	I0908 12:34:02.347492  561600 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-090979
	I0908 12:34:02.365138  561600 profile.go:143] Saving config to /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/addons-090979/config.json ...
	I0908 12:34:02.365545  561600 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 12:34:02.365617  561600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-090979
	I0908 12:34:02.383401  561600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/21508-558996/.minikube/machines/addons-090979/id_rsa Username:docker}
	I0908 12:34:02.471127  561600 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0908 12:34:02.475753  561600 start.go:128] duration metric: took 10.283901566s to createHost
	I0908 12:34:02.475826  561600 start.go:83] releasing machines lock for "addons-090979", held for 10.284081062s
	I0908 12:34:02.475907  561600 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-090979
	I0908 12:34:02.493384  561600 ssh_runner.go:195] Run: cat /version.json
	I0908 12:34:02.493408  561600 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0908 12:34:02.493439  561600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-090979
	I0908 12:34:02.493489  561600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-090979
	I0908 12:34:02.518142  561600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/21508-558996/.minikube/machines/addons-090979/id_rsa Username:docker}
	I0908 12:34:02.523695  561600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/21508-558996/.minikube/machines/addons-090979/id_rsa Username:docker}
	I0908 12:34:02.751825  561600 ssh_runner.go:195] Run: systemctl --version
	I0908 12:34:02.756138  561600 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0908 12:34:02.896249  561600 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0908 12:34:02.900568  561600 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 12:34:02.922857  561600 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0908 12:34:02.922950  561600 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 12:34:02.957660  561600 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
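The two find/mv passes above neuter every preinstalled CNI config (loopback, podman bridge, crio bridge) by renaming it to *.mk_disabled, so the CNI minikube installs later (kindnet, per the cni.go lines further down) is the only one CRI-O loads. A Go sketch of the same sweep (illustration only; it prints the renames rather than performing them):

    package main

    import (
    	"fmt"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	patterns := []string{"*loopback.conf*", "*bridge*", "*podman*"}
    	for _, pat := range patterns {
    		matches, err := filepath.Glob(filepath.Join("/etc/cni/net.d", pat))
    		if err != nil {
    			panic(err)
    		}
    		for _, m := range matches {
    			if strings.HasSuffix(m, ".mk_disabled") {
    				continue // already disabled on a previous run
    			}
    			fmt.Printf("mv %s %s.mk_disabled\n", m, m)
    		}
    	}
    }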
	I0908 12:34:02.957686  561600 start.go:495] detecting cgroup driver to use...
	I0908 12:34:02.957721  561600 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0908 12:34:02.957858  561600 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0908 12:34:02.974752  561600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0908 12:34:02.989112  561600 docker.go:218] disabling cri-docker service (if available) ...
	I0908 12:34:02.989195  561600 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0908 12:34:03.006732  561600 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0908 12:34:03.021033  561600 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0908 12:34:03.112522  561600 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0908 12:34:03.210936  561600 docker.go:234] disabling docker service ...
	I0908 12:34:03.211007  561600 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0908 12:34:03.230848  561600 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0908 12:34:03.242723  561600 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0908 12:34:03.334852  561600 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0908 12:34:03.429864  561600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0908 12:34:03.441135  561600 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0908 12:34:03.458737  561600 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0908 12:34:03.458811  561600 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 12:34:03.468839  561600 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0908 12:34:03.468960  561600 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 12:34:03.479340  561600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 12:34:03.488860  561600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 12:34:03.498546  561600 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0908 12:34:03.507835  561600 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 12:34:03.517529  561600 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 12:34:03.533225  561600 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
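The sed sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image to registry.k8s.io/pause:3.10.1, forces the cgroupfs cgroup manager (matching the "cgroupfs" driver detected on the host at 12:34:02), moves conmon into the pod cgroup, and opens unprivileged ports via net.ipv4.ip_unprivileged_port_start=0. A Go sketch of the first two edits (a simplified illustration of the same regex rewrites; it prints the result instead of writing the file):

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    func main() {
    	conf, err := os.ReadFile("/etc/crio/crio.conf.d/02-crio.conf")
    	if err != nil {
    		panic(err)
    	}
    	// Equivalent of: sed 's|^.*pause_image = .*$|pause_image = "..."|'
    	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAll(conf, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
    	// Equivalent of: sed 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
    	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAll(conf, []byte(`cgroup_manager = "cgroupfs"`))
    	fmt.Print(string(conf))
    }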
	I0908 12:34:03.542809  561600 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0908 12:34:03.551281  561600 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0908 12:34:03.559899  561600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 12:34:03.637438  561600 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0908 12:34:03.748003  561600 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0908 12:34:03.748092  561600 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0908 12:34:03.751994  561600 start.go:563] Will wait 60s for crictl version
	I0908 12:34:03.752101  561600 ssh_runner.go:195] Run: which crictl
	I0908 12:34:03.756418  561600 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0908 12:34:03.793568  561600 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0908 12:34:03.793713  561600 ssh_runner.go:195] Run: crio --version
	I0908 12:34:03.831865  561600 ssh_runner.go:195] Run: crio --version
	I0908 12:34:03.879429  561600 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0908 12:34:03.882344  561600 cli_runner.go:164] Run: docker network inspect addons-090979 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0908 12:34:03.899370  561600 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0908 12:34:03.903192  561600 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 12:34:03.914280  561600 kubeadm.go:875] updating cluster {Name:addons-090979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-090979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0908 12:34:03.914402  561600 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 12:34:03.914472  561600 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 12:34:03.998806  561600 crio.go:514] all images are preloaded for cri-o runtime.
	I0908 12:34:03.998833  561600 crio.go:433] Images already preloaded, skipping extraction
	I0908 12:34:03.998892  561600 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 12:34:04.040514  561600 crio.go:514] all images are preloaded for cri-o runtime.
	I0908 12:34:04.040539  561600 cache_images.go:85] Images are preloaded, skipping loading
	I0908 12:34:04.040548  561600 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 crio true true} ...
	I0908 12:34:04.040639  561600 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-090979 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:addons-090979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0908 12:34:04.040723  561600 ssh_runner.go:195] Run: crio config
	I0908 12:34:04.089385  561600 cni.go:84] Creating CNI manager for ""
	I0908 12:34:04.089409  561600 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0908 12:34:04.089419  561600 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0908 12:34:04.089440  561600 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-090979 NodeName:addons-090979 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0908 12:34:04.089566  561600 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-090979"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0908 12:34:04.089638  561600 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0908 12:34:04.098433  561600 binaries.go:44] Found k8s binaries, skipping transfer
	I0908 12:34:04.098505  561600 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0908 12:34:04.107168  561600 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0908 12:34:04.125305  561600 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0908 12:34:04.143395  561600 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I0908 12:34:04.161627  561600 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0908 12:34:04.165048  561600 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0908 12:34:04.176051  561600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 12:34:04.269974  561600 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 12:34:04.285152  561600 certs.go:68] Setting up /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/addons-090979 for IP: 192.168.49.2
	I0908 12:34:04.285186  561600 certs.go:194] generating shared ca certs ...
	I0908 12:34:04.285204  561600 certs.go:226] acquiring lock for ca certs: {Name:mk0ff9e19e9952011d1b6ccb4c93c3f59626ecb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:34:04.285382  561600 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21508-558996/.minikube/ca.key
	I0908 12:34:05.663153  561600 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21508-558996/.minikube/ca.crt ...
	I0908 12:34:05.663187  561600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-558996/.minikube/ca.crt: {Name:mk5bc59a2f51eb047784ca48017746299755193f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:34:05.663368  561600 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21508-558996/.minikube/ca.key ...
	I0908 12:34:05.663384  561600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-558996/.minikube/ca.key: {Name:mke3bc3280257dd684cf7b4ea52acc414cf0676a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:34:05.663468  561600 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21508-558996/.minikube/proxy-client-ca.key
	I0908 12:34:05.896638  561600 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21508-558996/.minikube/proxy-client-ca.crt ...
	I0908 12:34:05.896669  561600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-558996/.minikube/proxy-client-ca.crt: {Name:mk69264d88eaeb60805ea52d0c796ef0227deea5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:34:05.896858  561600 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21508-558996/.minikube/proxy-client-ca.key ...
	I0908 12:34:05.896873  561600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-558996/.minikube/proxy-client-ca.key: {Name:mkb9494957c82a902363a2743b04926adf03a9ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:34:05.896956  561600 certs.go:256] generating profile certs ...
	I0908 12:34:05.897018  561600 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/addons-090979/client.key
	I0908 12:34:05.897038  561600 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/addons-090979/client.crt with IP's: []
	I0908 12:34:06.684288  561600 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/addons-090979/client.crt ...
	I0908 12:34:06.684323  561600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/addons-090979/client.crt: {Name:mk8a4e58dc23b3b4e9780b8dd6ff021a672e91d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:34:06.684503  561600 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/addons-090979/client.key ...
	I0908 12:34:06.684519  561600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/addons-090979/client.key: {Name:mk47a00b53c4d4a425cc6f8cc45ae29cbe9f6f21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:34:06.684610  561600 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/addons-090979/apiserver.key.69ccd44c
	I0908 12:34:06.684632  561600 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/addons-090979/apiserver.crt.69ccd44c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0908 12:34:07.382840  561600 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/addons-090979/apiserver.crt.69ccd44c ...
	I0908 12:34:07.382877  561600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/addons-090979/apiserver.crt.69ccd44c: {Name:mk298f536b3f47cef04765a3fbfa98dcff5f775f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:34:07.383075  561600 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/addons-090979/apiserver.key.69ccd44c ...
	I0908 12:34:07.383094  561600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/addons-090979/apiserver.key.69ccd44c: {Name:mkf1692cd14f80975bd6e69c6721586df1a1a9f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:34:07.383184  561600 certs.go:381] copying /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/addons-090979/apiserver.crt.69ccd44c -> /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/addons-090979/apiserver.crt
	I0908 12:34:07.383269  561600 certs.go:385] copying /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/addons-090979/apiserver.key.69ccd44c -> /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/addons-090979/apiserver.key
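The profile certs generated above give the apiserver a serving certificate whose SANs cover the service VIP (10.96.0.1), localhost, and the node IP (192.168.49.2). A self-contained Go sketch of issuing such a cert with crypto/x509 (self-signed here for brevity; minikube signs with its minikubeCA instead):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// The IP SANs from the log line above.
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
    			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.49.2"),
    		},
    		DNSNames: []string{"localhost", "minikube"},
    	}
    	// Template doubles as parent, producing a self-signed certificate.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
    		panic(err)
    	}
    }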
	I0908 12:34:07.383327  561600 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/addons-090979/proxy-client.key
	I0908 12:34:07.383345  561600 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/addons-090979/proxy-client.crt with IP's: []
	I0908 12:34:08.164902  561600 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/addons-090979/proxy-client.crt ...
	I0908 12:34:08.164936  561600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/addons-090979/proxy-client.crt: {Name:mkc7689c475af6a4e34c7cbd46f1baa6a2e8d02d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:34:08.165125  561600 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/addons-090979/proxy-client.key ...
	I0908 12:34:08.165140  561600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/addons-090979/proxy-client.key: {Name:mkd4a8c17f7224abd374e2b5d171dd6436710572 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:34:08.165330  561600 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-558996/.minikube/certs/ca-key.pem (1675 bytes)
	I0908 12:34:08.165374  561600 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-558996/.minikube/certs/ca.pem (1082 bytes)
	I0908 12:34:08.165407  561600 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-558996/.minikube/certs/cert.pem (1123 bytes)
	I0908 12:34:08.165434  561600 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-558996/.minikube/certs/key.pem (1675 bytes)
	I0908 12:34:08.166067  561600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-558996/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0908 12:34:08.192168  561600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-558996/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0908 12:34:08.216369  561600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-558996/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0908 12:34:08.240672  561600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-558996/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0908 12:34:08.264629  561600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/addons-090979/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0908 12:34:08.288915  561600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/addons-090979/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0908 12:34:08.312788  561600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/addons-090979/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0908 12:34:08.336778  561600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/addons-090979/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0908 12:34:08.360863  561600 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-558996/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0908 12:34:08.385592  561600 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0908 12:34:08.404055  561600 ssh_runner.go:195] Run: openssl version
	I0908 12:34:08.409921  561600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0908 12:34:08.420096  561600 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0908 12:34:08.423837  561600 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  8 12:34 /usr/share/ca-certificates/minikubeCA.pem
	I0908 12:34:08.423901  561600 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0908 12:34:08.430911  561600 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
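The openssl/ln pair above wires minikube's CA into the system trust store: OpenSSL locates CAs in /etc/ssl/certs by subject-hash filenames, so the CA's subject hash (b5213941 here) becomes the symlink name b5213941.0. A small Go sketch of the same computation via the openssl CLI (illustration only; it prints the intended symlink instead of creating it):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Same invocation as the log: openssl x509 -hash -noout -in <cert>
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout",
    		"-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
    	if err != nil {
    		panic(err)
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
    	fmt.Printf("ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/%s.0\n", hash)
    }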
	I0908 12:34:08.440393  561600 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0908 12:34:08.443802  561600 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0908 12:34:08.443852  561600 kubeadm.go:392] StartCluster: {Name:addons-090979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-090979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 12:34:08.443926  561600 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0908 12:34:08.443988  561600 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0908 12:34:08.481160  561600 cri.go:89] found id: ""
	I0908 12:34:08.481230  561600 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0908 12:34:08.490183  561600 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0908 12:34:08.499238  561600 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0908 12:34:08.499375  561600 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0908 12:34:08.508285  561600 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0908 12:34:08.508304  561600 kubeadm.go:157] found existing configuration files:
	
	I0908 12:34:08.508378  561600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0908 12:34:08.517178  561600 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0908 12:34:08.517273  561600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0908 12:34:08.526114  561600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0908 12:34:08.535099  561600 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0908 12:34:08.535215  561600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0908 12:34:08.544054  561600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0908 12:34:08.552744  561600 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0908 12:34:08.552834  561600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0908 12:34:08.561664  561600 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0908 12:34:08.570642  561600 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0908 12:34:08.570716  561600 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0908 12:34:08.578958  561600 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0908 12:34:08.637847  561600 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0908 12:34:08.638328  561600 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0908 12:34:08.714508  561600 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0908 12:34:22.193621  561600 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0908 12:34:22.193683  561600 kubeadm.go:310] [preflight] Running pre-flight checks
	I0908 12:34:22.193804  561600 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0908 12:34:22.193865  561600 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1084-aws
	I0908 12:34:22.193905  561600 kubeadm.go:310] OS: Linux
	I0908 12:34:22.193963  561600 kubeadm.go:310] CGROUPS_CPU: enabled
	I0908 12:34:22.194019  561600 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0908 12:34:22.194073  561600 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0908 12:34:22.194128  561600 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0908 12:34:22.194187  561600 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0908 12:34:22.194240  561600 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0908 12:34:22.194290  561600 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0908 12:34:22.194348  561600 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0908 12:34:22.194399  561600 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0908 12:34:22.194475  561600 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0908 12:34:22.194573  561600 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0908 12:34:22.194667  561600 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0908 12:34:22.194732  561600 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0908 12:34:22.198079  561600 out.go:252]   - Generating certificates and keys ...
	I0908 12:34:22.198181  561600 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0908 12:34:22.198254  561600 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0908 12:34:22.198330  561600 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0908 12:34:22.198395  561600 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0908 12:34:22.198470  561600 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0908 12:34:22.198529  561600 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0908 12:34:22.198588  561600 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0908 12:34:22.198713  561600 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-090979 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0908 12:34:22.198769  561600 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0908 12:34:22.198893  561600 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-090979 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0908 12:34:22.198964  561600 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0908 12:34:22.199031  561600 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0908 12:34:22.199214  561600 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0908 12:34:22.199283  561600 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0908 12:34:22.199336  561600 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0908 12:34:22.199402  561600 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0908 12:34:22.199462  561600 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0908 12:34:22.199531  561600 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0908 12:34:22.199592  561600 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0908 12:34:22.199681  561600 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0908 12:34:22.199752  561600 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0908 12:34:22.202819  561600 out.go:252]   - Booting up control plane ...
	I0908 12:34:22.203066  561600 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0908 12:34:22.203168  561600 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0908 12:34:22.203289  561600 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0908 12:34:22.203417  561600 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0908 12:34:22.203526  561600 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0908 12:34:22.203645  561600 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0908 12:34:22.203757  561600 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0908 12:34:22.203819  561600 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0908 12:34:22.203959  561600 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0908 12:34:22.204070  561600 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0908 12:34:22.204136  561600 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.501599479s
	I0908 12:34:22.204236  561600 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0908 12:34:22.204327  561600 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0908 12:34:22.204433  561600 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0908 12:34:22.204526  561600 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0908 12:34:22.204608  561600 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.72813733s
	I0908 12:34:22.204688  561600 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 4.222977251s
	I0908 12:34:22.204764  561600 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 6.00127236s
	I0908 12:34:22.204892  561600 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0908 12:34:22.205027  561600 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0908 12:34:22.205091  561600 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0908 12:34:22.205311  561600 kubeadm.go:310] [mark-control-plane] Marking the node addons-090979 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0908 12:34:22.205374  561600 kubeadm.go:310] [bootstrap-token] Using token: 7s07gv.f4suob0q6iho4020
	I0908 12:34:22.208347  561600 out.go:252]   - Configuring RBAC rules ...
	I0908 12:34:22.208483  561600 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0908 12:34:22.208575  561600 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0908 12:34:22.208724  561600 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0908 12:34:22.208857  561600 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0908 12:34:22.208978  561600 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0908 12:34:22.209070  561600 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0908 12:34:22.209190  561600 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0908 12:34:22.209239  561600 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0908 12:34:22.209289  561600 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0908 12:34:22.209301  561600 kubeadm.go:310] 
	I0908 12:34:22.209362  561600 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0908 12:34:22.209372  561600 kubeadm.go:310] 
	I0908 12:34:22.209451  561600 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0908 12:34:22.209460  561600 kubeadm.go:310] 
	I0908 12:34:22.209486  561600 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0908 12:34:22.209546  561600 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0908 12:34:22.209601  561600 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0908 12:34:22.209610  561600 kubeadm.go:310] 
	I0908 12:34:22.209671  561600 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0908 12:34:22.209679  561600 kubeadm.go:310] 
	I0908 12:34:22.209728  561600 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0908 12:34:22.209741  561600 kubeadm.go:310] 
	I0908 12:34:22.209949  561600 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0908 12:34:22.210091  561600 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0908 12:34:22.210200  561600 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0908 12:34:22.210242  561600 kubeadm.go:310] 
	I0908 12:34:22.210371  561600 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0908 12:34:22.210487  561600 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0908 12:34:22.210528  561600 kubeadm.go:310] 
	I0908 12:34:22.210669  561600 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 7s07gv.f4suob0q6iho4020 \
	I0908 12:34:22.210788  561600 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:23c6a0eea1efaca448428059525ae3967501287f8dc3b0d99bdeff58fc4b52fb \
	I0908 12:34:22.210811  561600 kubeadm.go:310] 	--control-plane 
	I0908 12:34:22.210816  561600 kubeadm.go:310] 
	I0908 12:34:22.210916  561600 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0908 12:34:22.210921  561600 kubeadm.go:310] 
	I0908 12:34:22.211017  561600 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 7s07gv.f4suob0q6iho4020 \
	I0908 12:34:22.211164  561600 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:23c6a0eea1efaca448428059525ae3967501287f8dc3b0d99bdeff58fc4b52fb 
	I0908 12:34:22.211174  561600 cni.go:84] Creating CNI manager for ""
	I0908 12:34:22.211181  561600 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0908 12:34:22.214727  561600 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0908 12:34:22.217802  561600 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0908 12:34:22.222818  561600 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0908 12:34:22.222850  561600 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0908 12:34:22.245511  561600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0908 12:34:22.578231  561600 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0908 12:34:22.578365  561600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 12:34:22.578409  561600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-090979 minikube.k8s.io/updated_at=2025_09_08T12_34_22_0700 minikube.k8s.io/version=v1.36.0 minikube.k8s.io/commit=3f6dd380c737091fd766d425b85ffa6c4f72b9ba minikube.k8s.io/name=addons-090979 minikube.k8s.io/primary=true
	I0908 12:34:22.769672  561600 ops.go:34] apiserver oom_adj: -16
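The oom_adj probe above confirms the kubelet started the apiserver with an OOM adjustment of -16, which strongly deprioritizes it for kills under memory pressure. A one-file Go sketch of the same check (reading our own process as a stand-in for the pgrep'd kube-apiserver pid):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	// /proc/<pid>/oom_adj is the legacy interface the log reads;
    	// /proc/<pid>/oom_score_adj is its modern replacement.
    	data, err := os.ReadFile("/proc/self/oom_adj")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("oom_adj:", strings.TrimSpace(string(data)))
    }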
	I0908 12:34:22.769849  561600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 12:34:23.270843  561600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 12:34:23.770534  561600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 12:34:24.270769  561600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 12:34:24.770587  561600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 12:34:25.270384  561600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 12:34:25.770480  561600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 12:34:26.269938  561600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 12:34:26.769939  561600 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0908 12:34:26.873630  561600 kubeadm.go:1105] duration metric: took 4.295317451s to wait for elevateKubeSystemPrivileges
	I0908 12:34:26.873665  561600 kubeadm.go:394] duration metric: took 18.429817022s to StartCluster
	I0908 12:34:26.873683  561600 settings.go:142] acquiring lock: {Name:mk228a8fe00d572c8dbba4dfcbf398931a86fc6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:34:26.873819  561600 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21508-558996/kubeconfig
	I0908 12:34:26.874239  561600 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-558996/kubeconfig: {Name:mkf6991a6d647bd18fb424354e6cabd8d2baaa9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:34:26.874426  561600 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0908 12:34:26.874574  561600 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0908 12:34:26.874815  561600 config.go:182] Loaded profile config "addons-090979": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 12:34:26.874865  561600 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0908 12:34:26.874941  561600 addons.go:69] Setting yakd=true in profile "addons-090979"
	I0908 12:34:26.874958  561600 addons.go:238] Setting addon yakd=true in "addons-090979"
	I0908 12:34:26.874983  561600 host.go:66] Checking if "addons-090979" exists ...
	I0908 12:34:26.875423  561600 cli_runner.go:164] Run: docker container inspect addons-090979 --format={{.State.Status}}
	I0908 12:34:26.875974  561600 addons.go:69] Setting metrics-server=true in profile "addons-090979"
	I0908 12:34:26.875997  561600 addons.go:238] Setting addon metrics-server=true in "addons-090979"
	I0908 12:34:26.876020  561600 host.go:66] Checking if "addons-090979" exists ...
	I0908 12:34:26.876429  561600 cli_runner.go:164] Run: docker container inspect addons-090979 --format={{.State.Status}}
	I0908 12:34:26.877230  561600 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-090979"
	I0908 12:34:26.879850  561600 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-090979"
	I0908 12:34:26.879936  561600 host.go:66] Checking if "addons-090979" exists ...
	I0908 12:34:26.879688  561600 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-090979"
	I0908 12:34:26.880661  561600 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-090979"
	I0908 12:34:26.880712  561600 host.go:66] Checking if "addons-090979" exists ...
	I0908 12:34:26.881169  561600 cli_runner.go:164] Run: docker container inspect addons-090979 --format={{.State.Status}}
	I0908 12:34:26.881805  561600 addons.go:69] Setting cloud-spanner=true in profile "addons-090979"
	I0908 12:34:26.881869  561600 addons.go:238] Setting addon cloud-spanner=true in "addons-090979"
	I0908 12:34:26.881909  561600 host.go:66] Checking if "addons-090979" exists ...
	I0908 12:34:26.882380  561600 cli_runner.go:164] Run: docker container inspect addons-090979 --format={{.State.Status}}
	I0908 12:34:26.879698  561600 addons.go:69] Setting registry=true in profile "addons-090979"
	I0908 12:34:26.885504  561600 addons.go:238] Setting addon registry=true in "addons-090979"
	I0908 12:34:26.885537  561600 host.go:66] Checking if "addons-090979" exists ...
	I0908 12:34:26.886007  561600 cli_runner.go:164] Run: docker container inspect addons-090979 --format={{.State.Status}}
	I0908 12:34:26.891127  561600 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-090979"
	I0908 12:34:26.891271  561600 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-090979"
	I0908 12:34:26.891342  561600 host.go:66] Checking if "addons-090979" exists ...
	I0908 12:34:26.891935  561600 cli_runner.go:164] Run: docker container inspect addons-090979 --format={{.State.Status}}
	I0908 12:34:26.879707  561600 addons.go:69] Setting registry-creds=true in profile "addons-090979"
	I0908 12:34:26.879710  561600 addons.go:69] Setting storage-provisioner=true in profile "addons-090979"
	I0908 12:34:26.893856  561600 addons.go:238] Setting addon storage-provisioner=true in "addons-090979"
	I0908 12:34:26.893890  561600 host.go:66] Checking if "addons-090979" exists ...
	I0908 12:34:26.879714  561600 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-090979"
	I0908 12:34:26.894334  561600 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-090979"
	I0908 12:34:26.879717  561600 addons.go:69] Setting volcano=true in profile "addons-090979"
	I0908 12:34:26.894558  561600 addons.go:238] Setting addon volcano=true in "addons-090979"
	I0908 12:34:26.879720  561600 addons.go:69] Setting volumesnapshots=true in profile "addons-090979"
	I0908 12:34:26.894624  561600 addons.go:238] Setting addon volumesnapshots=true in "addons-090979"
	I0908 12:34:26.894646  561600 host.go:66] Checking if "addons-090979" exists ...
	I0908 12:34:26.895083  561600 cli_runner.go:164] Run: docker container inspect addons-090979 --format={{.State.Status}}
	I0908 12:34:26.904808  561600 addons.go:69] Setting default-storageclass=true in profile "addons-090979"
	I0908 12:34:26.904841  561600 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-090979"
	I0908 12:34:26.905173  561600 cli_runner.go:164] Run: docker container inspect addons-090979 --format={{.State.Status}}
	I0908 12:34:26.909008  561600 out.go:179] * Verifying Kubernetes components...
	I0908 12:34:26.922408  561600 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 12:34:26.922924  561600 cli_runner.go:164] Run: docker container inspect addons-090979 --format={{.State.Status}}
	I0908 12:34:26.926836  561600 addons.go:69] Setting gcp-auth=true in profile "addons-090979"
	I0908 12:34:26.926881  561600 mustload.go:65] Loading cluster: addons-090979
	I0908 12:34:26.927082  561600 config.go:182] Loaded profile config "addons-090979": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 12:34:26.927341  561600 cli_runner.go:164] Run: docker container inspect addons-090979 --format={{.State.Status}}
	I0908 12:34:26.893765  561600 addons.go:238] Setting addon registry-creds=true in "addons-090979"
	I0908 12:34:26.938226  561600 host.go:66] Checking if "addons-090979" exists ...
	I0908 12:34:26.954508  561600 addons.go:69] Setting ingress=true in profile "addons-090979"
	I0908 12:34:26.954598  561600 addons.go:238] Setting addon ingress=true in "addons-090979"
	I0908 12:34:26.954676  561600 host.go:66] Checking if "addons-090979" exists ...
	I0908 12:34:26.957502  561600 cli_runner.go:164] Run: docker container inspect addons-090979 --format={{.State.Status}}
	I0908 12:34:26.963269  561600 cli_runner.go:164] Run: docker container inspect addons-090979 --format={{.State.Status}}
	I0908 12:34:26.978722  561600 cli_runner.go:164] Run: docker container inspect addons-090979 --format={{.State.Status}}
	I0908 12:34:26.984135  561600 addons.go:69] Setting ingress-dns=true in profile "addons-090979"
	I0908 12:34:26.984188  561600 addons.go:238] Setting addon ingress-dns=true in "addons-090979"
	I0908 12:34:26.984235  561600 host.go:66] Checking if "addons-090979" exists ...
	I0908 12:34:26.984727  561600 cli_runner.go:164] Run: docker container inspect addons-090979 --format={{.State.Status}}
	I0908 12:34:27.009292  561600 host.go:66] Checking if "addons-090979" exists ...
	I0908 12:34:27.009890  561600 cli_runner.go:164] Run: docker container inspect addons-090979 --format={{.State.Status}}
	I0908 12:34:27.024523  561600 addons.go:69] Setting inspektor-gadget=true in profile "addons-090979"
	I0908 12:34:27.024616  561600 addons.go:238] Setting addon inspektor-gadget=true in "addons-090979"
	I0908 12:34:27.024674  561600 host.go:66] Checking if "addons-090979" exists ...
	I0908 12:34:27.025162  561600 cli_runner.go:164] Run: docker container inspect addons-090979 --format={{.State.Status}}
	I0908 12:34:27.061945  561600 cli_runner.go:164] Run: docker container inspect addons-090979 --format={{.State.Status}}
	I0908 12:34:27.093927  561600 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.40
	I0908 12:34:27.097231  561600 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0908 12:34:27.101287  561600 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0908 12:34:27.101312  561600 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0908 12:34:27.101390  561600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-090979
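This docker inspect template recurs throughout the log: it indexes the container's published-ports map to recover the host port that backs the container's SSH port 22, which is the port the sshutil clients below connect to. Run by hand it looks like this (a sketch using the profile name from the log):

    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-090979
    # prints the mapped host port - 33504 in this run, matching the Port:33504 SSH clients below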
	I0908 12:34:27.119535  561600 addons.go:238] Setting addon default-storageclass=true in "addons-090979"
	I0908 12:34:27.119581  561600 host.go:66] Checking if "addons-090979" exists ...
	I0908 12:34:27.122165  561600 cli_runner.go:164] Run: docker container inspect addons-090979 --format={{.State.Status}}
	I0908 12:34:27.149937  561600 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0908 12:34:27.153208  561600 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0908 12:34:27.156169  561600 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0908 12:34:27.165280  561600 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0908 12:34:27.166264  561600 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0908 12:34:27.166286  561600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0908 12:34:27.166355  561600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-090979
	I0908 12:34:27.190670  561600 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0908 12:34:27.196235  561600 out.go:179]   - Using image docker.io/registry:3.0.0
	I0908 12:34:27.202167  561600 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.3
	I0908 12:34:27.226337  561600 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0908 12:34:27.229979  561600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0908 12:34:27.230115  561600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-090979
	I0908 12:34:27.203694  561600 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0908 12:34:27.204283  561600 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0908 12:34:27.232610  561600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0908 12:34:27.232697  561600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-090979
	I0908 12:34:27.243047  561600 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0908 12:34:27.250907  561600 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0908 12:34:27.250951  561600 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0908 12:34:27.250967  561600 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0908 12:34:27.251042  561600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-090979
	I0908 12:34:27.226560  561600 host.go:66] Checking if "addons-090979" exists ...
	I0908 12:34:27.203683  561600 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I0908 12:34:27.257930  561600 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0908 12:34:27.257955  561600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0908 12:34:27.258021  561600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-090979
	W0908 12:34:27.270295  561600 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0908 12:34:27.271246  561600 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-090979"
	I0908 12:34:27.271284  561600 host.go:66] Checking if "addons-090979" exists ...
	I0908 12:34:27.271683  561600 cli_runner.go:164] Run: docker container inspect addons-090979 --format={{.State.Status}}
	I0908 12:34:27.284783  561600 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I0908 12:34:27.288041  561600 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0908 12:34:27.288066  561600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I0908 12:34:27.288132  561600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-090979
	I0908 12:34:27.292776  561600 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 12:34:27.292798  561600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0908 12:34:27.292864  561600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-090979
	I0908 12:34:27.315629  561600 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0908 12:34:27.315651  561600 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0908 12:34:27.315718  561600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-090979
	I0908 12:34:27.323885  561600 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.0
	I0908 12:34:27.330609  561600 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0908 12:34:27.330636  561600 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I0908 12:34:27.330712  561600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-090979
	I0908 12:34:27.340130  561600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/21508-558996/.minikube/machines/addons-090979/id_rsa Username:docker}
	I0908 12:34:27.344759  561600 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0908 12:34:27.347957  561600 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0908 12:34:27.348128  561600 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0908 12:34:27.348139  561600 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0908 12:34:27.348198  561600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-090979
	I0908 12:34:27.371025  561600 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I0908 12:34:27.375850  561600 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0908 12:34:27.375983  561600 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0908 12:34:27.382144  561600 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0908 12:34:27.389745  561600 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0908 12:34:27.389762  561600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0908 12:34:27.389847  561600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-090979
	I0908 12:34:27.390010  561600 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0908 12:34:27.390320  561600 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0908 12:34:27.390373  561600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0908 12:34:27.390451  561600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-090979
	I0908 12:34:27.406833  561600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/21508-558996/.minikube/machines/addons-090979/id_rsa Username:docker}
	I0908 12:34:27.412646  561600 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0908 12:34:27.415465  561600 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0908 12:34:27.415492  561600 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0908 12:34:27.415566  561600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-090979
	I0908 12:34:27.418234  561600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/21508-558996/.minikube/machines/addons-090979/id_rsa Username:docker}
	I0908 12:34:27.472474  561600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/21508-558996/.minikube/machines/addons-090979/id_rsa Username:docker}
	I0908 12:34:27.544466  561600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/21508-558996/.minikube/machines/addons-090979/id_rsa Username:docker}
	I0908 12:34:27.555132  561600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/21508-558996/.minikube/machines/addons-090979/id_rsa Username:docker}
	I0908 12:34:27.555882  561600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/21508-558996/.minikube/machines/addons-090979/id_rsa Username:docker}
	I0908 12:34:27.561043  561600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/21508-558996/.minikube/machines/addons-090979/id_rsa Username:docker}
	I0908 12:34:27.581364  561600 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0908 12:34:27.584612  561600 out.go:179]   - Using image docker.io/busybox:stable
	I0908 12:34:27.591736  561600 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0908 12:34:27.591761  561600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0908 12:34:27.591829  561600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-090979
	I0908 12:34:27.606246  561600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/21508-558996/.minikube/machines/addons-090979/id_rsa Username:docker}
	I0908 12:34:27.614216  561600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/21508-558996/.minikube/machines/addons-090979/id_rsa Username:docker}
	I0908 12:34:27.642103  561600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/21508-558996/.minikube/machines/addons-090979/id_rsa Username:docker}
	I0908 12:34:27.642594  561600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/21508-558996/.minikube/machines/addons-090979/id_rsa Username:docker}
	I0908 12:34:27.643525  561600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/21508-558996/.minikube/machines/addons-090979/id_rsa Username:docker}
	I0908 12:34:27.655367  561600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/21508-558996/.minikube/machines/addons-090979/id_rsa Username:docker}
	I0908 12:34:27.656397  561600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/21508-558996/.minikube/machines/addons-090979/id_rsa Username:docker}
	I0908 12:34:27.718038  561600 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0908 12:34:27.718114  561600 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0908 12:34:27.754553  561600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0908 12:34:27.767168  561600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0908 12:34:27.880936  561600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0908 12:34:27.927743  561600 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0908 12:34:27.927764  561600 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0908 12:34:27.947051  561600 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.024606193s)
	I0908 12:34:27.947113  561600 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 12:34:27.947172  561600 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.072578697s)
	I0908 12:34:27.947296  561600 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
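The pipeline above rewrites the CoreDNS ConfigMap in place: the first sed expression inserts a hosts block ahead of the forward plugin so that host.minikube.internal resolves to the gateway address 192.168.49.1, and the second inserts a log directive ahead of errors. One way to verify the injected fragment afterwards (a sketch; the expected content is reconstructed from the sed expressions, not dumped from this cluster):

    kubectl --context addons-090979 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
    # per the sed expressions above, the Corefile should now contain:
    #         hosts {
    #            192.168.49.1 host.minikube.internal
    #            fallthrough
    #         }
    # immediately before the "forward . /etc/resolv.conf" line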
	I0908 12:34:27.957241  561600 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0908 12:34:27.957313  561600 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0908 12:34:28.037604  561600 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0908 12:34:28.037679  561600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0908 12:34:28.059110  561600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0908 12:34:28.061500  561600 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0908 12:34:28.061572  561600 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0908 12:34:28.087526  561600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0908 12:34:28.098141  561600 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0908 12:34:28.098213  561600 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0908 12:34:28.101731  561600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0908 12:34:28.108942  561600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0908 12:34:28.149591  561600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0908 12:34:28.163891  561600 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0908 12:34:28.163913  561600 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0908 12:34:28.182309  561600 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0908 12:34:28.182332  561600 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0908 12:34:28.185699  561600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0908 12:34:28.192042  561600 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0908 12:34:28.192065  561600 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0908 12:34:28.229226  561600 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0908 12:34:28.229300  561600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0908 12:34:28.292185  561600 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0908 12:34:28.292262  561600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I0908 12:34:28.294358  561600 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0908 12:34:28.294428  561600 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0908 12:34:28.387885  561600 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0908 12:34:28.387962  561600 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0908 12:34:28.398769  561600 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0908 12:34:28.398852  561600 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0908 12:34:28.436312  561600 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0908 12:34:28.436387  561600 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0908 12:34:28.457913  561600 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0908 12:34:28.457994  561600 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0908 12:34:28.468851  561600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0908 12:34:28.540584  561600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 12:34:28.595838  561600 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0908 12:34:28.595910  561600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0908 12:34:28.616098  561600 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0908 12:34:28.616181  561600 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0908 12:34:28.624554  561600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0908 12:34:28.726137  561600 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0908 12:34:28.726215  561600 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0908 12:34:28.757059  561600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0908 12:34:28.851048  561600 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0908 12:34:28.851121  561600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0908 12:34:28.885323  561600 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0908 12:34:28.885393  561600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0908 12:34:28.966464  561600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0908 12:34:28.998833  561600 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0908 12:34:28.998927  561600 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0908 12:34:29.064030  561600 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0908 12:34:29.064105  561600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0908 12:34:29.132435  561600 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0908 12:34:29.132504  561600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0908 12:34:29.218851  561600 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0908 12:34:29.218926  561600 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0908 12:34:29.261731  561600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0908 12:34:30.740017  561600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.985428569s)
	I0908 12:34:31.404661  561600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.637407656s)
	I0908 12:34:31.404748  561600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.523741656s)
	I0908 12:34:31.404810  561600 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.457503083s)
	I0908 12:34:31.404828  561600 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0908 12:34:31.405796  561600 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.458651299s)
	I0908 12:34:31.406547  561600 node_ready.go:35] waiting up to 6m0s for node "addons-090979" to be "Ready" ...
	I0908 12:34:31.891082  561600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.831889604s)
	I0908 12:34:31.891202  561600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (3.803604685s)
	I0908 12:34:31.928370  561600 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-090979" context rescaled to 1 replicas
	I0908 12:34:32.853115  561600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.744087411s)
	I0908 12:34:32.853201  561600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.703541002s)
	I0908 12:34:32.853236  561600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.751414388s)
	I0908 12:34:32.853248  561600 addons.go:479] Verifying addon ingress=true in "addons-090979"
	I0908 12:34:32.853263  561600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.667538531s)
	I0908 12:34:32.853470  561600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.384541815s)
	I0908 12:34:32.853482  561600 addons.go:479] Verifying addon registry=true in "addons-090979"
	I0908 12:34:32.854018  561600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.313347725s)
	W0908 12:34:32.854050  561600 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 12:34:32.854089  561600 retry.go:31] will retry after 353.790708ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
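The apply keeps failing on ig-crd.yaml with "apiVersion not set, kind not set", which is consistent with the transfer recorded earlier in this log: the manifest was scp'd to the node as only 14 bytes, far too small to hold a CustomResourceDefinition, so kubectl finds no apiVersion or kind to validate. A first diagnostic step from the host would be to inspect the file on the node (a sketch):

    minikube -p addons-090979 ssh -- wc -c /etc/kubernetes/addons/ig-crd.yaml
    minikube -p addons-090979 ssh -- cat /etc/kubernetes/addons/ig-crd.yaml
    # the scp line above reports 14 bytes; an intact CRD manifest runs to kilobytes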
	I0908 12:34:32.854166  561600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.229537756s)
	I0908 12:34:32.854176  561600 addons.go:479] Verifying addon metrics-server=true in "addons-090979"
	I0908 12:34:32.856496  561600 out.go:179] * Verifying ingress addon...
	I0908 12:34:32.858563  561600 out.go:179] * Verifying registry addon...
	I0908 12:34:32.861332  561600 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0908 12:34:32.863352  561600 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0908 12:34:32.879333  561600 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0908 12:34:32.879409  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:34:32.879716  561600 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0908 12:34:32.879768  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:34:32.998502  561600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.241347854s)
	I0908 12:34:32.998612  561600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.032071906s)
	W0908 12:34:32.998541  561600 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0908 12:34:32.998855  561600 retry.go:31] will retry after 374.095618ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
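Unlike the ig-crd failure, this one is the usual CRD-ordering race rather than a damaged manifest: the VolumeSnapshot CRDs and a VolumeSnapshotClass that instantiates them are applied in a single batch, and the API server has not established the new types by the time the class is validated, hence "ensure CRDs are installed first". The retry below succeeds once the CRDs settle. Split into two phases with an explicit wait, the same apply is race-free (a sketch using the manifest paths from this log):

    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
                  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
                  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
    # block until the new API types are served before creating instances of them
    kubectl wait --for=condition=established --all crd --timeout=60s
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml \
                  -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml \
                  -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml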
	I0908 12:34:33.001949  561600 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-090979 service yakd-dashboard -n yakd-dashboard
	
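For reference, minikube's service subcommand resolves a reachable URL for the named service; adding --url prints it instead of opening a browser (a usage note, not something exercised in this run):

    minikube -p addons-090979 service yakd-dashboard -n yakd-dashboard --url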
	I0908 12:34:33.208473  561600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 12:34:33.260687  561600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.998772377s)
	I0908 12:34:33.260724  561600 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-090979"
	I0908 12:34:33.263623  561600 out.go:179] * Verifying csi-hostpath-driver addon...
	I0908 12:34:33.267212  561600 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0908 12:34:33.276057  561600 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0908 12:34:33.276082  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:34:33.368413  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:34:33.368877  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:34:33.373178  561600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W0908 12:34:33.411244  561600 node_ready.go:57] node "addons-090979" has "Ready":"False" status (will retry)
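node_ready polls the node object until its Ready condition flips to True, within the 6m0s budget set above. The equivalent one-shot check from the host looks like this (a sketch):

    kubectl --context addons-090979 get node addons-090979
    kubectl --context addons-090979 wait --for=condition=Ready node/addons-090979 --timeout=6m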
	I0908 12:34:33.775449  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:34:33.865698  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:34:33.868467  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0908 12:34:34.202523  561600 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 12:34:34.202556  561600 retry.go:31] will retry after 420.444708ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 12:34:34.270592  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:34:34.370766  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:34:34.371083  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:34:34.623687  561600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 12:34:34.771651  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:34:34.865586  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:34:34.867517  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:34:35.271371  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:34:35.373369  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:34:35.373768  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:34:35.771333  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:34:35.879730  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:34:35.880164  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 12:34:35.914935  561600 node_ready.go:57] node "addons-090979" has "Ready":"False" status (will retry)
	I0908 12:34:36.188389  561600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.815153662s)
	I0908 12:34:36.188509  561600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.564794041s)
	W0908 12:34:36.188581  561600 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 12:34:36.188606  561600 retry.go:31] will retry after 541.462003ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 12:34:36.271022  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:34:36.372364  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:34:36.372612  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:34:36.731204  561600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 12:34:36.774146  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:34:36.866388  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:34:36.866422  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:34:37.271043  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:34:37.348976  561600 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0908 12:34:37.349055  561600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-090979
	I0908 12:34:37.372643  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:34:37.372708  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:34:37.372797  561600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/21508-558996/.minikube/machines/addons-090979/id_rsa Username:docker}
	I0908 12:34:37.491210  561600 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0908 12:34:37.514093  561600 addons.go:238] Setting addon gcp-auth=true in "addons-090979"
	I0908 12:34:37.514199  561600 host.go:66] Checking if "addons-090979" exists ...
	I0908 12:34:37.514729  561600 cli_runner.go:164] Run: docker container inspect addons-090979 --format={{.State.Status}}
	I0908 12:34:37.535173  561600 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0908 12:34:37.535232  561600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-090979
	I0908 12:34:37.554509  561600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/21508-558996/.minikube/machines/addons-090979/id_rsa Username:docker}
	W0908 12:34:37.606933  561600 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 12:34:37.606982  561600 retry.go:31] will retry after 1.257934409s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 12:34:37.656634  561600 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0908 12:34:37.659408  561600 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0908 12:34:37.662352  561600 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0908 12:34:37.662386  561600 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0908 12:34:37.681252  561600 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0908 12:34:37.681285  561600 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0908 12:34:37.711351  561600 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0908 12:34:37.711375  561600 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0908 12:34:37.730564  561600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0908 12:34:37.770376  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:34:37.865069  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:34:37.867668  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:34:38.219578  561600 addons.go:479] Verifying addon gcp-auth=true in "addons-090979"
	I0908 12:34:38.222528  561600 out.go:179] * Verifying gcp-auth addon...
	I0908 12:34:38.226337  561600 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0908 12:34:38.234760  561600 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0908 12:34:38.234786  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
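
The kapi.go:96 lines throughout this log are minikube polling each addon's label selector until its pods leave Pending. A minimal client-go sketch of that wait loop (hypothetical helper name; minikube's real implementation lives in kapi.go and also bounds the wait with a timeout):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForLabel polls until every pod matching sel in ns is Running.
    func waitForLabel(cs *kubernetes.Clientset, ns, sel string) error {
        for {
            pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
                metav1.ListOptions{LabelSelector: sel})
            if err != nil {
                return err
            }
            ready := len(pods.Items) > 0
            for _, p := range pods.Items {
                if p.Status.Phase != corev1.PodRunning {
                    ready = false
                }
            }
            if ready {
                return nil
            }
            time.Sleep(500 * time.Millisecond) // roughly the cadence seen in the log
        }
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Println(waitForLabel(cs, "gcp-auth", "kubernetes.io/minikube-addons=gcp-auth"))
    }
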
	I0908 12:34:38.334006  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:34:38.365667  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:34:38.366966  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0908 12:34:38.411067  561600 node_ready.go:57] node "addons-090979" has "Ready":"False" status (will retry)
	I0908 12:34:38.730479  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:34:38.770488  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:34:38.864508  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:34:38.865509  561600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 12:34:38.867596  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:34:39.229907  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:34:39.271273  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:34:39.364946  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:34:39.367378  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0908 12:34:39.694939  561600 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 12:34:39.694979  561600 retry.go:31] will retry after 1.609694786s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 12:34:39.730982  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:34:39.770799  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:34:39.864999  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:34:39.867420  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:34:40.229760  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:34:40.270410  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:34:40.364567  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:34:40.366346  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:34:40.729913  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:34:40.770617  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:34:40.865963  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:34:40.866426  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0908 12:34:40.910249  561600 node_ready.go:57] node "addons-090979" has "Ready":"False" status (will retry)
	I0908 12:34:41.229267  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:34:41.270770  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:34:41.305113  561600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 12:34:41.369702  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:34:41.370089  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:34:41.729484  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:34:41.772476  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:34:41.867591  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:34:41.868498  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 12:34:42.113363  561600 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 12:34:42.113403  561600 retry.go:31] will retry after 989.850315ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 12:34:42.230052  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:34:42.271103  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:34:42.365337  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:34:42.366926  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:34:42.729995  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:34:42.770907  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:34:42.864774  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:34:42.866370  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:34:43.104236  561600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 12:34:43.229073  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:34:43.271303  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:34:43.364449  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:34:43.367050  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0908 12:34:43.410176  561600 node_ready.go:57] node "addons-090979" has "Ready":"False" status (will retry)
	I0908 12:34:43.730846  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:34:43.772673  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:34:43.866406  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:34:43.867700  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0908 12:34:43.925622  561600 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 12:34:43.925657  561600 retry.go:31] will retry after 1.873385022s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 12:34:44.230062  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:34:44.270862  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:34:44.366088  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:34:44.366871  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:34:44.729741  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:34:44.770617  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:34:44.864767  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:34:44.866847  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:34:45.231738  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:34:45.273990  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:34:45.369900  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:34:45.370645  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 12:34:45.410542  561600 node_ready.go:57] node "addons-090979" has "Ready":"False" status (will retry)
	I0908 12:34:45.729609  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:34:45.770449  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:34:45.799757  561600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 12:34:45.871947  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:34:45.872313  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:34:46.230227  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:34:46.270866  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:34:46.365159  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:34:46.374108  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0908 12:34:46.620462  561600 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 12:34:46.620498  561600 retry.go:31] will retry after 5.720574035s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
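
The retry.go:31 intervals in this log (541ms, 1.26s, 1.61s, 0.99s, 1.87s, 5.72s, ...) grow roughly exponentially with random jitter. A sketch of that pattern under those assumptions (not minikube's actual retry.go):

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // retry runs fn until it succeeds or attempts are exhausted, sleeping an
    // exponentially growing, jittered interval between failures.
    func retry(attempts int, base time.Duration, fn func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            d := base << uint(i)                      // exponential growth
            d += time.Duration(rand.Int63n(int64(d))) // up to 100% jitter
            fmt.Printf("will retry after %v: %v\n", d, err)
            time.Sleep(d)
        }
        return err
    }

    func main() {
        _ = retry(5, 500*time.Millisecond, func() error {
            return fmt.Errorf("apply failed")
        })
    }
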
	I0908 12:34:46.729822  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:34:46.771068  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:34:46.865683  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:34:46.865984  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:34:47.229053  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:34:47.270793  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:34:47.365018  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:34:47.367310  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:34:47.729857  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:34:47.771092  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:34:47.865050  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:34:47.866724  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0908 12:34:47.910185  561600 node_ready.go:57] node "addons-090979" has "Ready":"False" status (will retry)
	I0908 12:34:48.229420  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:34:48.270374  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:34:48.364473  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:34:48.366439  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:34:48.730352  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:34:48.771064  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:34:48.864944  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:34:48.867186  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:34:49.229588  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:34:49.270486  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:34:49.364367  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:34:49.366301  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:34:49.729223  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:34:49.771118  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:34:49.865134  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:34:49.867597  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0908 12:34:49.910532  561600 node_ready.go:57] node "addons-090979" has "Ready":"False" status (will retry)
	I0908 12:34:50.229739  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:34:50.270513  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:34:50.365860  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:34:50.367326  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:34:50.730247  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:34:50.770034  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:34:50.867544  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:34:50.867815  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:34:51.229157  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:34:51.270134  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:34:51.365402  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:34:51.366642  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:34:51.729575  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:34:51.770365  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:34:51.864505  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:34:51.866628  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:34:52.230321  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:34:52.270625  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:34:52.341959  561600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 12:34:52.368586  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:34:52.369500  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 12:34:52.410616  561600 node_ready.go:57] node "addons-090979" has "Ready":"False" status (will retry)
	I0908 12:34:52.729491  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:34:52.770718  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:34:52.867130  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:34:52.868021  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 12:34:53.160521  561600 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 12:34:53.160594  561600 retry.go:31] will retry after 3.835097101s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 12:34:53.229312  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:34:53.270146  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:34:53.365079  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:34:53.366492  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:34:53.729386  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:34:53.770445  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:34:53.864497  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:34:53.866815  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:34:54.229866  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:34:54.270538  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:34:54.364897  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:34:54.367383  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:34:54.729494  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:34:54.770121  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:34:54.865301  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:34:54.866524  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0908 12:34:54.909313  561600 node_ready.go:57] node "addons-090979" has "Ready":"False" status (will retry)
	I0908 12:34:55.229375  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:34:55.270256  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:34:55.366215  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:34:55.366361  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:34:55.729112  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:34:55.771113  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:34:55.866539  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:34:55.866944  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:34:56.229346  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:34:56.272851  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:34:56.364890  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:34:56.366892  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:34:56.729892  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:34:56.770522  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:34:56.864508  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:34:56.866391  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0908 12:34:56.910012  561600 node_ready.go:57] node "addons-090979" has "Ready":"False" status (will retry)
	I0908 12:34:56.996438  561600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 12:34:57.230188  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:34:57.270659  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:34:57.364977  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:34:57.368821  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:34:57.729845  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:34:57.770931  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0908 12:34:57.809264  561600 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 12:34:57.809350  561600 retry.go:31] will retry after 9.359119279s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 12:34:57.866410  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:34:57.866454  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:34:58.229647  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:34:58.270959  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:34:58.364733  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:34:58.367142  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:34:58.729824  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:34:58.770680  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:34:58.864897  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:34:58.867286  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0908 12:34:58.910303  561600 node_ready.go:57] node "addons-090979" has "Ready":"False" status (will retry)
	I0908 12:34:59.229506  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:34:59.270459  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:34:59.365060  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:34:59.366345  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:34:59.729349  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:34:59.769913  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:34:59.865592  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:34:59.867551  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:00.248692  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:00.287275  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:00.401579  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:00.404640  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:00.730468  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:00.770867  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:00.865688  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:00.867365  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0908 12:35:00.910387  561600 node_ready.go:57] node "addons-090979" has "Ready":"False" status (will retry)
	I0908 12:35:01.230174  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:01.272985  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:01.365727  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:01.367718  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:01.729696  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:01.770402  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:01.865381  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:01.867600  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:02.229692  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:02.270566  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:02.365633  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:02.370693  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:02.729761  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:02.770559  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:02.864719  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:02.867017  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:03.230180  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:03.270213  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:03.367316  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:03.368015  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 12:35:03.409835  561600 node_ready.go:57] node "addons-090979" has "Ready":"False" status (will retry)
	I0908 12:35:03.729906  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:03.771413  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:03.864740  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:03.866552  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:04.229481  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:04.270603  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:04.364631  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:04.366593  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:04.729286  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:04.771034  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:04.865228  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:04.866459  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:05.230083  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:05.271009  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:05.365037  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:05.366874  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:05.729888  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:05.770586  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:05.866380  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:05.868084  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0908 12:35:05.909620  561600 node_ready.go:57] node "addons-090979" has "Ready":"False" status (will retry)
	I0908 12:35:06.230454  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:06.270766  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:06.366236  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:06.367306  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:06.731332  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:06.769888  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:06.869910  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:06.872725  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:07.168837  561600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 12:35:07.230242  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:07.271519  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:07.365473  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:07.367995  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:07.730160  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:07.772580  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:07.867899  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:07.868209  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0908 12:35:07.988491  561600 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 12:35:07.988524  561600 retry.go:31] will retry after 8.558340985s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 12:35:08.229429  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:08.269861  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:08.365170  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:08.367291  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0908 12:35:08.410174  561600 node_ready.go:57] node "addons-090979" has "Ready":"False" status (will retry)
	I0908 12:35:08.729451  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:08.770526  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:08.865839  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:08.867156  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:09.229157  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:09.271025  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:09.365432  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:09.366941  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:09.730191  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:09.770063  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:09.890108  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:09.904386  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:09.934912  561600 node_ready.go:49] node "addons-090979" is "Ready"
	I0908 12:35:09.934992  561600 node_ready.go:38] duration metric: took 38.5284182s for node "addons-090979" to be "Ready" ...
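
The node turned Ready after ~38.5s; node_ready.go derives that from the NodeReady condition on the node object. A minimal sketch of the same check (assumed helper name; the kubeconfig path and node name are taken from this log):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isNodeReady reports whether the node's NodeReady condition is True.
    func isNodeReady(n *corev1.Node) bool {
        for _, c := range n.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        n, err := cs.CoreV1().Nodes().Get(context.TODO(), "addons-090979", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("Ready:", isNodeReady(n))
    }
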
	I0908 12:35:09.935023  561600 api_server.go:52] waiting for apiserver process to appear ...
	I0908 12:35:09.935111  561600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 12:35:09.966496  561600 api_server.go:72] duration metric: took 43.092006725s to wait for apiserver process to appear ...
	I0908 12:35:09.966593  561600 api_server.go:88] waiting for apiserver healthz status ...
	I0908 12:35:09.966631  561600 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0908 12:35:09.976309  561600 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0908 12:35:09.977619  561600 api_server.go:141] control plane version: v1.34.0
	I0908 12:35:09.977692  561600 api_server.go:131] duration metric: took 11.076136ms to wait for apiserver health ...
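
The healthz probe here is a plain GET against https://192.168.49.2:8443/healthz that expects a 200 response with body "ok". A self-contained sketch (InsecureSkipVerify only keeps the example standalone; the real client trusts the cluster CA instead):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        // Anonymous GETs to /healthz are permitted by the default
        // system:public-info-viewer binding, so no credentials are needed.
        c := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}
        resp, err := c.Get("https://192.168.49.2:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
    }
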
	I0908 12:35:09.977717  561600 system_pods.go:43] waiting for kube-system pods to appear ...
	I0908 12:35:09.995307  561600 system_pods.go:59] 19 kube-system pods found
	I0908 12:35:09.995418  561600 system_pods.go:61] "coredns-66bc5c9577-fxjj6" [eb0abab8-f0b3-4c0b-b62c-c01110ecefd3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 12:35:09.995442  561600 system_pods.go:61] "csi-hostpath-attacher-0" [c311f5e7-213f-4003-bdc0-53b94380294b] Pending
	I0908 12:35:09.995480  561600 system_pods.go:61] "csi-hostpath-resizer-0" [8bcfa71c-57b3-4d1d-b4c7-3ef9469cfff4] Pending
	I0908 12:35:09.995511  561600 system_pods.go:61] "csi-hostpathplugin-m2nck" [61245166-7d10-480f-bd6f-31045ee52959] Pending
	I0908 12:35:09.995536  561600 system_pods.go:61] "etcd-addons-090979" [fcc2a589-2904-4d25-8262-fd9b2e746c19] Running
	I0908 12:35:09.995568  561600 system_pods.go:61] "kindnet-j2gn4" [205df6a6-8ed7-434e-bb95-de51d292e089] Running
	I0908 12:35:09.995588  561600 system_pods.go:61] "kube-apiserver-addons-090979" [d0b3ef68-ec86-45f8-9b10-728fec7da839] Running
	I0908 12:35:09.995611  561600 system_pods.go:61] "kube-controller-manager-addons-090979" [b7a65a9d-c4b4-4efb-9a8c-a84f1c8852d9] Running
	I0908 12:35:09.995647  561600 system_pods.go:61] "kube-ingress-dns-minikube" [12d291f8-3941-44b2-a2e3-365024d843fa] Pending
	I0908 12:35:09.995667  561600 system_pods.go:61] "kube-proxy-lz2kz" [1ea0f923-b54d-4501-87cb-ed1afce85b82] Running
	I0908 12:35:09.995691  561600 system_pods.go:61] "kube-scheduler-addons-090979" [0d505c91-26b5-4f5a-9e18-047012dbf03e] Running
	I0908 12:35:09.995723  561600 system_pods.go:61] "metrics-server-85b7d694d7-p5sf7" [2ab85486-40f7-420a-a23a-20e524ee6bd9] Pending
	I0908 12:35:09.995743  561600 system_pods.go:61] "nvidia-device-plugin-daemonset-8qq6w" [b6281aed-d538-40d1-9efe-6f733a1faf5f] Pending
	I0908 12:35:09.995776  561600 system_pods.go:61] "registry-66898fdd98-brlbg" [00043075-d8d2-4dc6-b57e-cecbd79fd981] Pending
	I0908 12:35:09.995809  561600 system_pods.go:61] "registry-creds-764b6fb674-wmz28" [b52ddb64-c692-4103-a2a7-ae97a03d3f5e] Pending
	I0908 12:35:09.995830  561600 system_pods.go:61] "registry-proxy-9pffk" [d9dbf333-7861-40eb-ab83-fc6661520da1] Pending
	I0908 12:35:09.995863  561600 system_pods.go:61] "snapshot-controller-7d9fbc56b8-65t4w" [c064f953-5728-48d4-8a68-3973a15f4545] Pending
	I0908 12:35:09.995892  561600 system_pods.go:61] "snapshot-controller-7d9fbc56b8-6xxp8" [dec9dc25-056f-4043-8a12-b9fc3594e85d] Pending
	I0908 12:35:09.995912  561600 system_pods.go:61] "storage-provisioner" [ad9aaa56-6919-469f-9e5a-8aea0e8c410e] Pending
	I0908 12:35:09.995944  561600 system_pods.go:74] duration metric: took 18.205902ms to wait for pod list to return data ...
	I0908 12:35:09.995978  561600 default_sa.go:34] waiting for default service account to be created ...
	I0908 12:35:10.010577  561600 default_sa.go:45] found service account: "default"
	I0908 12:35:10.010664  561600 default_sa.go:55] duration metric: took 14.663686ms for default service account to be created ...
	I0908 12:35:10.010691  561600 system_pods.go:116] waiting for k8s-apps to be running ...
	I0908 12:35:10.027682  561600 system_pods.go:86] 19 kube-system pods found
	I0908 12:35:10.027789  561600 system_pods.go:89] "coredns-66bc5c9577-fxjj6" [eb0abab8-f0b3-4c0b-b62c-c01110ecefd3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 12:35:10.027812  561600 system_pods.go:89] "csi-hostpath-attacher-0" [c311f5e7-213f-4003-bdc0-53b94380294b] Pending
	I0908 12:35:10.027835  561600 system_pods.go:89] "csi-hostpath-resizer-0" [8bcfa71c-57b3-4d1d-b4c7-3ef9469cfff4] Pending
	I0908 12:35:10.027872  561600 system_pods.go:89] "csi-hostpathplugin-m2nck" [61245166-7d10-480f-bd6f-31045ee52959] Pending
	I0908 12:35:10.027893  561600 system_pods.go:89] "etcd-addons-090979" [fcc2a589-2904-4d25-8262-fd9b2e746c19] Running
	I0908 12:35:10.027923  561600 system_pods.go:89] "kindnet-j2gn4" [205df6a6-8ed7-434e-bb95-de51d292e089] Running
	I0908 12:35:10.027965  561600 system_pods.go:89] "kube-apiserver-addons-090979" [d0b3ef68-ec86-45f8-9b10-728fec7da839] Running
	I0908 12:35:10.027987  561600 system_pods.go:89] "kube-controller-manager-addons-090979" [b7a65a9d-c4b4-4efb-9a8c-a84f1c8852d9] Running
	I0908 12:35:10.028010  561600 system_pods.go:89] "kube-ingress-dns-minikube" [12d291f8-3941-44b2-a2e3-365024d843fa] Pending
	I0908 12:35:10.028049  561600 system_pods.go:89] "kube-proxy-lz2kz" [1ea0f923-b54d-4501-87cb-ed1afce85b82] Running
	I0908 12:35:10.028069  561600 system_pods.go:89] "kube-scheduler-addons-090979" [0d505c91-26b5-4f5a-9e18-047012dbf03e] Running
	I0908 12:35:10.028093  561600 system_pods.go:89] "metrics-server-85b7d694d7-p5sf7" [2ab85486-40f7-420a-a23a-20e524ee6bd9] Pending
	I0908 12:35:10.028134  561600 system_pods.go:89] "nvidia-device-plugin-daemonset-8qq6w" [b6281aed-d538-40d1-9efe-6f733a1faf5f] Pending
	I0908 12:35:10.028156  561600 system_pods.go:89] "registry-66898fdd98-brlbg" [00043075-d8d2-4dc6-b57e-cecbd79fd981] Pending
	I0908 12:35:10.028180  561600 system_pods.go:89] "registry-creds-764b6fb674-wmz28" [b52ddb64-c692-4103-a2a7-ae97a03d3f5e] Pending
	I0908 12:35:10.028214  561600 system_pods.go:89] "registry-proxy-9pffk" [d9dbf333-7861-40eb-ab83-fc6661520da1] Pending
	I0908 12:35:10.028241  561600 system_pods.go:89] "snapshot-controller-7d9fbc56b8-65t4w" [c064f953-5728-48d4-8a68-3973a15f4545] Pending
	I0908 12:35:10.028264  561600 system_pods.go:89] "snapshot-controller-7d9fbc56b8-6xxp8" [dec9dc25-056f-4043-8a12-b9fc3594e85d] Pending
	I0908 12:35:10.028299  561600 system_pods.go:89] "storage-provisioner" [ad9aaa56-6919-469f-9e5a-8aea0e8c410e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0908 12:35:10.028342  561600 retry.go:31] will retry after 299.489664ms: missing components: kube-dns
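
The "will retry after …" line comes from a generic retry helper: re-list the kube-system pods, compute which required components are still missing (here kube-dns), and sleep a short interval before the next attempt. A hedged sketch of that pattern follows; the names and the deterministic delay growth are illustrative stand-ins for the randomized backoff seen in the log (299ms, 259ms, 349ms), not retry.go's real API.

	package main

	import (
		"fmt"
		"time"
	)

	// waitForComponents polls until missing() reports no absent components,
	// or the deadline passes.
	func waitForComponents(missing func() []string, timeout time.Duration) error {
		delay := 250 * time.Millisecond
		deadline := time.Now().Add(timeout)
		for {
			m := missing()
			if len(m) == 0 {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("missing components: %v", m)
			}
			fmt.Printf("will retry after %s: missing components: %v\n", delay, m)
			time.Sleep(delay)
			delay += delay / 4 // deterministic growth stands in for randomized backoff
		}
	}

	func main() {
		start := time.Now()
		err := waitForComponents(func() []string {
			if time.Since(start) < time.Second {
				return []string{"kube-dns"} // simulated: kube-dns pending briefly
			}
			return nil
		}, 10*time.Second)
		fmt.Println("done:", err)
	}
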
	I0908 12:35:10.343876  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:10.347000  561600 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0908 12:35:10.347074  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:10.392917  561600 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0908 12:35:10.392991  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:10.394517  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:10.395293  561600 system_pods.go:86] 19 kube-system pods found
	I0908 12:35:10.395327  561600 system_pods.go:89] "coredns-66bc5c9577-fxjj6" [eb0abab8-f0b3-4c0b-b62c-c01110ecefd3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 12:35:10.395335  561600 system_pods.go:89] "csi-hostpath-attacher-0" [c311f5e7-213f-4003-bdc0-53b94380294b] Pending
	I0908 12:35:10.395377  561600 system_pods.go:89] "csi-hostpath-resizer-0" [8bcfa71c-57b3-4d1d-b4c7-3ef9469cfff4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0908 12:35:10.395383  561600 system_pods.go:89] "csi-hostpathplugin-m2nck" [61245166-7d10-480f-bd6f-31045ee52959] Pending
	I0908 12:35:10.395395  561600 system_pods.go:89] "etcd-addons-090979" [fcc2a589-2904-4d25-8262-fd9b2e746c19] Running
	I0908 12:35:10.395403  561600 system_pods.go:89] "kindnet-j2gn4" [205df6a6-8ed7-434e-bb95-de51d292e089] Running
	I0908 12:35:10.395413  561600 system_pods.go:89] "kube-apiserver-addons-090979" [d0b3ef68-ec86-45f8-9b10-728fec7da839] Running
	I0908 12:35:10.395418  561600 system_pods.go:89] "kube-controller-manager-addons-090979" [b7a65a9d-c4b4-4efb-9a8c-a84f1c8852d9] Running
	I0908 12:35:10.395441  561600 system_pods.go:89] "kube-ingress-dns-minikube" [12d291f8-3941-44b2-a2e3-365024d843fa] Pending
	I0908 12:35:10.395456  561600 system_pods.go:89] "kube-proxy-lz2kz" [1ea0f923-b54d-4501-87cb-ed1afce85b82] Running
	I0908 12:35:10.395467  561600 system_pods.go:89] "kube-scheduler-addons-090979" [0d505c91-26b5-4f5a-9e18-047012dbf03e] Running
	I0908 12:35:10.395473  561600 system_pods.go:89] "metrics-server-85b7d694d7-p5sf7" [2ab85486-40f7-420a-a23a-20e524ee6bd9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 12:35:10.395478  561600 system_pods.go:89] "nvidia-device-plugin-daemonset-8qq6w" [b6281aed-d538-40d1-9efe-6f733a1faf5f] Pending
	I0908 12:35:10.395488  561600 system_pods.go:89] "registry-66898fdd98-brlbg" [00043075-d8d2-4dc6-b57e-cecbd79fd981] Pending
	I0908 12:35:10.395494  561600 system_pods.go:89] "registry-creds-764b6fb674-wmz28" [b52ddb64-c692-4103-a2a7-ae97a03d3f5e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0908 12:35:10.395499  561600 system_pods.go:89] "registry-proxy-9pffk" [d9dbf333-7861-40eb-ab83-fc6661520da1] Pending
	I0908 12:35:10.395503  561600 system_pods.go:89] "snapshot-controller-7d9fbc56b8-65t4w" [c064f953-5728-48d4-8a68-3973a15f4545] Pending
	I0908 12:35:10.395516  561600 system_pods.go:89] "snapshot-controller-7d9fbc56b8-6xxp8" [dec9dc25-056f-4043-8a12-b9fc3594e85d] Pending
	I0908 12:35:10.395531  561600 system_pods.go:89] "storage-provisioner" [ad9aaa56-6919-469f-9e5a-8aea0e8c410e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0908 12:35:10.395551  561600 retry.go:31] will retry after 259.791669ms: missing components: kube-dns
	I0908 12:35:10.699246  561600 system_pods.go:86] 19 kube-system pods found
	I0908 12:35:10.699299  561600 system_pods.go:89] "coredns-66bc5c9577-fxjj6" [eb0abab8-f0b3-4c0b-b62c-c01110ecefd3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0908 12:35:10.699314  561600 system_pods.go:89] "csi-hostpath-attacher-0" [c311f5e7-213f-4003-bdc0-53b94380294b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0908 12:35:10.699393  561600 system_pods.go:89] "csi-hostpath-resizer-0" [8bcfa71c-57b3-4d1d-b4c7-3ef9469cfff4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0908 12:35:10.699399  561600 system_pods.go:89] "csi-hostpathplugin-m2nck" [61245166-7d10-480f-bd6f-31045ee52959] Pending
	I0908 12:35:10.699415  561600 system_pods.go:89] "etcd-addons-090979" [fcc2a589-2904-4d25-8262-fd9b2e746c19] Running
	I0908 12:35:10.699445  561600 system_pods.go:89] "kindnet-j2gn4" [205df6a6-8ed7-434e-bb95-de51d292e089] Running
	I0908 12:35:10.699462  561600 system_pods.go:89] "kube-apiserver-addons-090979" [d0b3ef68-ec86-45f8-9b10-728fec7da839] Running
	I0908 12:35:10.699483  561600 system_pods.go:89] "kube-controller-manager-addons-090979" [b7a65a9d-c4b4-4efb-9a8c-a84f1c8852d9] Running
	I0908 12:35:10.699508  561600 system_pods.go:89] "kube-ingress-dns-minikube" [12d291f8-3941-44b2-a2e3-365024d843fa] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0908 12:35:10.699516  561600 system_pods.go:89] "kube-proxy-lz2kz" [1ea0f923-b54d-4501-87cb-ed1afce85b82] Running
	I0908 12:35:10.699524  561600 system_pods.go:89] "kube-scheduler-addons-090979" [0d505c91-26b5-4f5a-9e18-047012dbf03e] Running
	I0908 12:35:10.699530  561600 system_pods.go:89] "metrics-server-85b7d694d7-p5sf7" [2ab85486-40f7-420a-a23a-20e524ee6bd9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 12:35:10.699552  561600 system_pods.go:89] "nvidia-device-plugin-daemonset-8qq6w" [b6281aed-d538-40d1-9efe-6f733a1faf5f] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0908 12:35:10.699573  561600 system_pods.go:89] "registry-66898fdd98-brlbg" [00043075-d8d2-4dc6-b57e-cecbd79fd981] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0908 12:35:10.699592  561600 system_pods.go:89] "registry-creds-764b6fb674-wmz28" [b52ddb64-c692-4103-a2a7-ae97a03d3f5e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0908 12:35:10.699605  561600 system_pods.go:89] "registry-proxy-9pffk" [d9dbf333-7861-40eb-ab83-fc6661520da1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0908 12:35:10.699611  561600 system_pods.go:89] "snapshot-controller-7d9fbc56b8-65t4w" [c064f953-5728-48d4-8a68-3973a15f4545] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 12:35:10.699626  561600 system_pods.go:89] "snapshot-controller-7d9fbc56b8-6xxp8" [dec9dc25-056f-4043-8a12-b9fc3594e85d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 12:35:10.699644  561600 system_pods.go:89] "storage-provisioner" [ad9aaa56-6919-469f-9e5a-8aea0e8c410e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0908 12:35:10.699711  561600 retry.go:31] will retry after 349.885095ms: missing components: kube-dns
	I0908 12:35:10.735281  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:10.781267  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:10.890808  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:10.896497  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:11.097920  561600 system_pods.go:86] 19 kube-system pods found
	I0908 12:35:11.097980  561600 system_pods.go:89] "coredns-66bc5c9577-fxjj6" [eb0abab8-f0b3-4c0b-b62c-c01110ecefd3] Running
	I0908 12:35:11.097993  561600 system_pods.go:89] "csi-hostpath-attacher-0" [c311f5e7-213f-4003-bdc0-53b94380294b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0908 12:35:11.098003  561600 system_pods.go:89] "csi-hostpath-resizer-0" [8bcfa71c-57b3-4d1d-b4c7-3ef9469cfff4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0908 12:35:11.098011  561600 system_pods.go:89] "csi-hostpathplugin-m2nck" [61245166-7d10-480f-bd6f-31045ee52959] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0908 12:35:11.098017  561600 system_pods.go:89] "etcd-addons-090979" [fcc2a589-2904-4d25-8262-fd9b2e746c19] Running
	I0908 12:35:11.098023  561600 system_pods.go:89] "kindnet-j2gn4" [205df6a6-8ed7-434e-bb95-de51d292e089] Running
	I0908 12:35:11.098033  561600 system_pods.go:89] "kube-apiserver-addons-090979" [d0b3ef68-ec86-45f8-9b10-728fec7da839] Running
	I0908 12:35:11.098046  561600 system_pods.go:89] "kube-controller-manager-addons-090979" [b7a65a9d-c4b4-4efb-9a8c-a84f1c8852d9] Running
	I0908 12:35:11.098059  561600 system_pods.go:89] "kube-ingress-dns-minikube" [12d291f8-3941-44b2-a2e3-365024d843fa] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0908 12:35:11.098064  561600 system_pods.go:89] "kube-proxy-lz2kz" [1ea0f923-b54d-4501-87cb-ed1afce85b82] Running
	I0908 12:35:11.098069  561600 system_pods.go:89] "kube-scheduler-addons-090979" [0d505c91-26b5-4f5a-9e18-047012dbf03e] Running
	I0908 12:35:11.098076  561600 system_pods.go:89] "metrics-server-85b7d694d7-p5sf7" [2ab85486-40f7-420a-a23a-20e524ee6bd9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0908 12:35:11.098089  561600 system_pods.go:89] "nvidia-device-plugin-daemonset-8qq6w" [b6281aed-d538-40d1-9efe-6f733a1faf5f] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0908 12:35:11.098096  561600 system_pods.go:89] "registry-66898fdd98-brlbg" [00043075-d8d2-4dc6-b57e-cecbd79fd981] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0908 12:35:11.098103  561600 system_pods.go:89] "registry-creds-764b6fb674-wmz28" [b52ddb64-c692-4103-a2a7-ae97a03d3f5e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0908 12:35:11.098130  561600 system_pods.go:89] "registry-proxy-9pffk" [d9dbf333-7861-40eb-ab83-fc6661520da1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0908 12:35:11.098142  561600 system_pods.go:89] "snapshot-controller-7d9fbc56b8-65t4w" [c064f953-5728-48d4-8a68-3973a15f4545] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 12:35:11.098190  561600 system_pods.go:89] "snapshot-controller-7d9fbc56b8-6xxp8" [dec9dc25-056f-4043-8a12-b9fc3594e85d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0908 12:35:11.098210  561600 system_pods.go:89] "storage-provisioner" [ad9aaa56-6919-469f-9e5a-8aea0e8c410e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0908 12:35:11.098219  561600 system_pods.go:126] duration metric: took 1.087496829s to wait for k8s-apps to be running ...
	I0908 12:35:11.098227  561600 system_svc.go:44] waiting for kubelet service to be running ...
	I0908 12:35:11.098301  561600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 12:35:11.187068  561600 system_svc.go:56] duration metric: took 88.831672ms (WaitForService) to wait for kubelet
	I0908 12:35:11.187110  561600 kubeadm.go:578] duration metric: took 44.312653982s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
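
The kubelet gate above is a plain exit-status check: `systemctl is-active --quiet` exits 0 only while the unit is active. A small Go sketch of that check, using the canonical single-unit form (the log's exact invocation is shown above) and run locally here rather than over SSH:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// kubeletRunning reports whether the kubelet systemd unit is active.
	// --quiet suppresses output; only the exit status matters.
	func kubeletRunning() bool {
		return exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run() == nil
	}

	func main() {
		fmt.Println("kubelet active:", kubeletRunning())
	}
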
	I0908 12:35:11.187130  561600 node_conditions.go:102] verifying NodePressure condition ...
	I0908 12:35:11.195617  561600 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0908 12:35:11.195663  561600 node_conditions.go:123] node cpu capacity is 2
	I0908 12:35:11.195676  561600 node_conditions.go:105] duration metric: took 8.539916ms to run NodePressure ...
	I0908 12:35:11.195688  561600 start.go:241] waiting for startup goroutines ...
	I0908 12:35:11.230022  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:11.275848  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:11.364560  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:11.366784  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:11.740078  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:11.776216  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:11.865052  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:11.867026  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:12.229910  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:12.271999  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:12.371645  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:12.375809  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:12.736048  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:12.771576  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:12.866127  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:12.867119  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:13.233084  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:13.270993  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:13.365111  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:13.367300  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:13.730421  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:13.770774  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:13.865988  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:13.867811  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:14.229980  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:14.272511  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:14.366243  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:14.369523  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:14.735806  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:14.833683  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:14.866393  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:14.869103  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:15.229739  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:15.271453  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:15.368977  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:15.370388  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:15.738224  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:15.772065  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:15.870158  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:15.870892  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:16.230151  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:16.279183  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:16.374990  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:16.378342  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:16.547543  561600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 12:35:16.741986  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:16.771816  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:16.867417  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:16.867691  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:17.239631  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:17.297447  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:17.383368  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:17.386707  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:17.745558  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:17.834461  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:17.874170  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:17.874196  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:17.932596  561600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.385012007s)
	W0908 12:35:17.932688  561600 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 12:35:17.932722  561600 retry.go:31] will retry after 14.769385232s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
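
The kubectl run against ig-crd.yaml fails client-side: at least one YAML document in the file lacks the top-level apiVersion and kind fields, so validation rejects it before anything reaches the apiserver, and the scheduled retry is bound to fail identically until the manifest is fixed (or validation is disabled with --validate=false, as the error message suggests). Below is a stdlib-only Go sketch of a pre-flight check for that condition; the split on "---" is deliberately crude, the file path is taken from the log, and the checker itself is illustrative, not part of minikube.

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// missingTypeMeta returns the indices of YAML documents that lack a
	// top-level apiVersion or kind field, mirroring kubectl's
	// "apiVersion not set, kind not set" validation error.
	func missingTypeMeta(manifest string) []int {
		var bad []int
		for i, doc := range strings.Split(manifest, "\n---") {
			hasAPIVersion, hasKind := false, false
			for _, line := range strings.Split(doc, "\n") {
				switch {
				case strings.HasPrefix(line, "apiVersion:"):
					hasAPIVersion = true
				case strings.HasPrefix(line, "kind:"):
					hasKind = true
				}
			}
			if strings.TrimSpace(doc) != "" && (!hasAPIVersion || !hasKind) {
				bad = append(bad, i)
			}
		}
		return bad
	}

	func main() {
		data, err := os.ReadFile("/etc/kubernetes/addons/ig-crd.yaml")
		if err != nil {
			fmt.Println(err)
			return
		}
		if bad := missingTypeMeta(string(data)); len(bad) > 0 {
			fmt.Println("documents missing apiVersion/kind:", bad)
		}
	}
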
	I0908 12:35:18.230060  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:18.271122  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:18.368315  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:18.467290  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:18.730249  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:18.774701  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:18.879181  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:18.880267  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:19.230318  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:19.270292  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:19.366820  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:19.367933  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:19.736250  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:19.786485  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:19.865957  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:19.866163  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:20.229714  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:20.271715  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:20.367516  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:20.376858  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:20.734001  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:20.772034  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:20.910794  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:20.911680  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:21.230008  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:21.270968  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:21.365667  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:21.366870  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:21.739742  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:21.771469  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:21.865603  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:21.868891  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:22.230655  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:22.271835  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:22.367143  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:22.387444  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:22.738181  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:22.772607  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:22.865959  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:22.867999  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:23.234184  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:23.272210  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:23.366820  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:23.367295  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:23.729958  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:23.771286  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:23.871342  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:23.877917  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:24.230186  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:24.271495  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:24.367390  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:24.368591  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:24.729952  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:24.771252  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:24.865726  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:24.866450  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:25.229657  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:25.270838  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:25.365620  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:25.367555  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:25.730103  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:25.771159  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:25.874339  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:25.874778  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:26.234326  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:26.271802  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:26.368474  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:26.373680  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:26.729916  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:26.801586  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:26.872528  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:26.872857  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:27.231285  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:27.271405  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:27.364828  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:27.368624  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:27.730656  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:27.771139  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:27.865008  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:27.878178  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:28.230581  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:28.277062  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:28.371417  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:28.371845  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:28.731161  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:28.770306  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:28.865719  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:28.867826  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:29.230632  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:29.270937  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:29.364666  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:29.366363  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:29.730752  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:29.770733  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:29.871952  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:29.872607  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:30.263157  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:30.343539  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:30.365763  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:30.367287  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:30.730375  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:30.770985  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:30.866676  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:30.866859  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:31.230335  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:31.270481  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:31.365763  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:31.366474  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:31.729580  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:31.770623  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:31.865483  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:31.866068  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:32.229638  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:32.270535  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:32.365570  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:32.367706  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:32.703006  561600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 12:35:32.729880  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:32.771370  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:32.865438  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:32.867167  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:33.230219  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:33.271055  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:33.373593  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:33.373704  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:33.730532  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:33.770561  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:33.845917  561600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.14286756s)
	W0908 12:35:33.845964  561600 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 12:35:33.845983  561600 retry.go:31] will retry after 21.000573821s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0908 12:35:33.866874  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:33.867066  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:34.230061  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:34.271817  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:34.368459  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:34.368956  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:34.734904  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:34.776220  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:34.865644  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:34.867211  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:35.230307  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:35.271449  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:35.365365  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:35.368719  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:35.730432  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:35.771350  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:35.866158  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:35.870950  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:36.229747  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:36.271109  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:36.366710  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:36.367269  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:36.730150  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:36.770227  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:36.865743  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:36.867121  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:37.229897  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:37.293719  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:37.368606  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:37.369049  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:37.729815  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:37.771026  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:37.867030  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:37.868387  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:38.230246  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:38.271394  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:38.371643  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:38.372608  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:38.729926  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:38.772213  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:38.864874  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:38.866804  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:39.230503  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:39.270925  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:39.367794  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:39.368751  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:39.738286  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:39.770306  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:39.865682  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:39.868838  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:40.231266  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:40.271172  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:40.366250  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:40.367108  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:40.729461  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:40.770740  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:40.865246  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:40.867475  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:41.229593  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:41.271772  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:41.367814  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:41.369497  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:41.735996  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:41.771908  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:41.877220  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:41.879636  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:42.246965  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:42.272114  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:42.371356  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:42.371965  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:42.730106  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:42.771077  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:42.865333  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:42.867478  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:43.230348  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:43.271197  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:43.366621  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:43.367820  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:43.736566  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:43.834260  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:43.934338  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:43.934598  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:44.230307  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:44.271523  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:44.365025  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:44.368005  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:44.731896  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:44.772270  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:44.864767  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:44.866731  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:45.236672  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:45.272873  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:45.368191  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:45.381093  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:45.730698  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:45.772015  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:45.887220  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:45.893436  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:46.229643  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:46.270981  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:46.365021  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:46.366506  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:46.729602  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:46.781983  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:46.867525  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:46.868213  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:47.229585  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:47.271368  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:47.364857  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:47.367436  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:47.729722  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:47.770830  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:47.866348  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:47.867085  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:48.229630  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:48.271583  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:48.365768  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:48.368979  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:48.730564  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:48.771791  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:48.867005  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:48.871046  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:49.229532  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:49.272564  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:49.365387  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:49.368418  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:49.734326  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:49.783426  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:49.868554  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:49.868923  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:50.230427  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:50.270950  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:50.364945  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:50.367211  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:50.734957  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:50.771502  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:50.865653  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:50.870249  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:51.229808  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:51.274034  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:51.366784  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:51.366957  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:51.730649  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:51.771891  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:51.868763  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:51.868902  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:52.230922  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:52.271236  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:52.374651  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:52.374841  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:52.731380  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:52.770393  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:52.864766  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:52.867686  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:53.229941  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:53.272124  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:53.367488  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:53.368213  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:53.730215  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:53.771471  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:53.868917  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:53.870186  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:54.230208  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:54.272138  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:54.379698  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:54.380223  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:54.739167  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:54.772991  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:54.846949  561600 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0908 12:35:54.869178  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:54.878158  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:55.229676  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:55.271127  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:55.378051  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:55.378539  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:55.731523  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:55.832565  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:55.937018  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:55.937039  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:56.025916  561600 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.178873112s)
	W0908 12:35:56.026001  561600 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0908 12:35:56.026116  561600 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
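
The retried failure above is a manifest-validation error, not a connectivity problem: kubectl rejected /etc/kubernetes/addons/ig-crd.yaml because a document in that file is missing the two type fields every Kubernetes object must declare, apiVersion and kind. The actual contents of ig-crd.yaml are not captured in this log; the following is only a minimal sketch of what a syntactically valid CRD looks like once those fields are present (the group and names are hypothetical placeholders, not inspektor-gadget's real ones):

	# Illustrative only -- not the real ig-crd.yaml. kubectl's validation
	# fails whenever the first two fields below are absent.
	apiVersion: apiextensions.k8s.io/v1
	kind: CustomResourceDefinition
	metadata:
	  name: examples.demo.example.com    # hypothetical CRD name
	spec:
	  group: demo.example.com
	  scope: Namespaced
	  names:
	    plural: examples
	    singular: example
	    kind: Example
	  versions:
	    - name: v1alpha1
	      served: true
	      storage: true
	      schema:
	        openAPIV3Schema:
	          type: object

As the stderr notes, --validate=false would suppress the check, but that only masks the malformed document; the "will retry" path shown here is the intended recovery.
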
	I0908 12:35:56.230147  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:56.274804  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:56.365642  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:56.366483  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:56.729879  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:56.771265  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:56.869301  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:56.869786  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:57.230920  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:57.272166  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:57.367083  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:57.367628  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:57.735581  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:57.771578  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:57.873840  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:57.874670  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:58.233042  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:58.292969  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:58.367224  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:58.367497  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:58.732276  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:58.775937  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:58.866413  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:58.867430  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:59.229538  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:59.270893  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:59.364793  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:59.366721  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:35:59.729928  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:35:59.770883  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:35:59.864950  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:35:59.867057  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:36:00.230738  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:36:00.282495  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:36:00.421368  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:36:00.434479  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:36:00.733006  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:36:00.839311  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:36:00.865389  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:36:00.868421  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:36:01.232863  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:36:01.273339  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:36:01.372268  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:36:01.372566  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:36:01.731020  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:36:01.773592  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:36:01.868142  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:36:01.869502  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:36:02.230734  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:36:02.271261  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:36:02.372974  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:36:02.374690  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:36:02.738341  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:36:02.839724  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:36:02.865432  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:36:02.866912  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0908 12:36:03.229910  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:36:03.271172  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:36:03.366381  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:36:03.366644  561600 kapi.go:107] duration metric: took 1m30.503297205s to wait for kubernetes.io/minikube-addons=registry ...
	I0908 12:36:03.731246  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:36:03.771060  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:36:03.865808  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:36:04.229991  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:36:04.271777  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:36:04.375937  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:36:04.729528  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:36:04.770664  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:36:04.865522  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:36:05.229552  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:36:05.271558  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:36:05.365396  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:36:05.729805  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:36:05.772306  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:36:05.866751  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:36:06.230538  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:36:06.272013  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:36:06.367852  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:36:06.732789  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:36:06.771012  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:36:06.865165  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:36:07.229905  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:36:07.273189  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:36:07.365421  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:36:07.729801  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:36:07.771389  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:36:07.864475  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:36:08.229880  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:36:08.271340  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:36:08.365432  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:36:08.729033  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:36:08.774691  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:36:08.869390  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:36:09.231014  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:36:09.272124  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:36:09.367122  561600 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0908 12:36:09.729582  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:36:09.773405  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:36:09.865352  561600 kapi.go:107] duration metric: took 1m37.004022325s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0908 12:36:10.229963  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:36:10.271832  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:36:10.761132  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:36:10.849760  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:36:11.231631  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:36:11.271939  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:36:11.731183  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:36:11.771818  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:36:12.230185  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:36:12.272013  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:36:12.730701  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:36:12.771299  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:36:13.230356  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:36:13.271428  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:36:13.730480  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:36:13.770983  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:36:14.230470  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:36:14.271376  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:36:14.736816  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:36:14.835185  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:36:15.229102  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:36:15.271709  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:36:15.730073  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:36:15.770922  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:36:16.229530  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:36:16.270714  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:36:16.729819  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:36:16.776095  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:36:17.231241  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:36:17.273302  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:36:17.730111  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:36:17.772423  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:36:18.230467  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:36:18.271981  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:36:18.730045  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:36:18.771706  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:36:19.230076  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0908 12:36:19.271116  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:36:19.730584  561600 kapi.go:107] duration metric: took 1m41.504244578s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0908 12:36:19.733593  561600 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-090979 cluster.
	I0908 12:36:19.736427  561600 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0908 12:36:19.739166  561600 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
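
The gcp-auth messages above describe a per-pod opt-out: credential injection is skipped when the pod carries a label whose key is gcp-auth-skip-secret. A minimal sketch of such a pod follows (the pod name and image are placeholders, and the label value "true" is assumed rather than shown in this log):

	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds-demo          # hypothetical name
	  labels:
	    gcp-auth-skip-secret: "true"   # key per the message above; value assumed
	spec:
	  containers:
	    - name: app
	      image: busybox               # placeholder image
	      command: ["sleep", "3600"]
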
	I0908 12:36:19.831704  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:36:20.271815  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:36:20.770572  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:36:21.272555  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:36:21.801911  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:36:22.271771  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:36:22.770789  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:36:23.270861  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:36:23.782528  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:36:24.271984  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:36:24.771958  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:36:25.270278  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:36:25.771177  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:36:26.271641  561600 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0908 12:36:26.772036  561600 kapi.go:107] duration metric: took 1m53.504823216s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0908 12:36:26.775189  561600 out.go:179] * Enabled addons: nvidia-device-plugin, cloud-spanner, amd-gpu-device-plugin, registry-creds, storage-provisioner-rancher, storage-provisioner, ingress-dns, metrics-server, default-storageclass, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0908 12:36:26.778042  561600 addons.go:514] duration metric: took 1m59.903159512s for enable addons: enabled=[nvidia-device-plugin cloud-spanner amd-gpu-device-plugin registry-creds storage-provisioner-rancher storage-provisioner ingress-dns metrics-server default-storageclass yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0908 12:36:26.778088  561600 start.go:246] waiting for cluster config update ...
	I0908 12:36:26.778110  561600 start.go:255] writing updated cluster config ...
	I0908 12:36:26.778418  561600 ssh_runner.go:195] Run: rm -f paused
	I0908 12:36:26.782307  561600 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 12:36:26.789954  561600 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-fxjj6" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:36:26.796058  561600 pod_ready.go:94] pod "coredns-66bc5c9577-fxjj6" is "Ready"
	I0908 12:36:26.796130  561600 pod_ready.go:86] duration metric: took 6.142552ms for pod "coredns-66bc5c9577-fxjj6" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:36:26.798513  561600 pod_ready.go:83] waiting for pod "etcd-addons-090979" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:36:26.803287  561600 pod_ready.go:94] pod "etcd-addons-090979" is "Ready"
	I0908 12:36:26.803374  561600 pod_ready.go:86] duration metric: took 4.831225ms for pod "etcd-addons-090979" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:36:26.805716  561600 pod_ready.go:83] waiting for pod "kube-apiserver-addons-090979" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:36:26.810409  561600 pod_ready.go:94] pod "kube-apiserver-addons-090979" is "Ready"
	I0908 12:36:26.810436  561600 pod_ready.go:86] duration metric: took 4.696316ms for pod "kube-apiserver-addons-090979" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:36:26.812879  561600 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-090979" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:36:27.186369  561600 pod_ready.go:94] pod "kube-controller-manager-addons-090979" is "Ready"
	I0908 12:36:27.186400  561600 pod_ready.go:86] duration metric: took 373.494395ms for pod "kube-controller-manager-addons-090979" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:36:27.386454  561600 pod_ready.go:83] waiting for pod "kube-proxy-lz2kz" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:36:27.786788  561600 pod_ready.go:94] pod "kube-proxy-lz2kz" is "Ready"
	I0908 12:36:27.786815  561600 pod_ready.go:86] duration metric: took 400.330899ms for pod "kube-proxy-lz2kz" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:36:27.990157  561600 pod_ready.go:83] waiting for pod "kube-scheduler-addons-090979" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:36:28.386355  561600 pod_ready.go:94] pod "kube-scheduler-addons-090979" is "Ready"
	I0908 12:36:28.386384  561600 pod_ready.go:86] duration metric: took 396.192248ms for pod "kube-scheduler-addons-090979" in "kube-system" namespace to be "Ready" or be gone ...
	I0908 12:36:28.386396  561600 pod_ready.go:40] duration metric: took 1.604057364s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0908 12:36:28.439694  561600 start.go:617] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0908 12:36:28.442947  561600 out.go:179] * Done! kubectl is now configured to use "addons-090979" cluster and "default" namespace by default
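
The closing version line reports a one-minor-version skew between the kubectl client (1.33.2) and the cluster (1.34.0). That is within kubectl's documented support window of one minor version in either direction, so the message is informational rather than a failure. The check can be reproduced by hand against this profile (output abbreviated):

	$ kubectl --context addons-090979 version
	Client Version: v1.33.2
	Server Version: v1.34.0
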
	
	
	==> CRI-O <==
	Sep 08 12:38:21 addons-090979 crio[993]: time="2025-09-08 12:38:21.950876166Z" level=info msg="Removed pod sandbox: cc1edadcd0cdaa75244f1ea150ac08759c0956215c157bbf2b8455ea086b8ae7" id=f822f73e-5154-43e7-8d8d-7cc677679e6e name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 08 12:40:26 addons-090979 crio[993]: time="2025-09-08 12:40:26.677816644Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-ghcb9/POD" id=c3ed42f4-6fea-43e4-a3b9-335ca5361371 name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 08 12:40:26 addons-090979 crio[993]: time="2025-09-08 12:40:26.677884927Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 08 12:40:26 addons-090979 crio[993]: time="2025-09-08 12:40:26.744133074Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-ghcb9 Namespace:default ID:a9de36b96bdd14df298f7ae227997b73cb805c53448cbbf71ac5b9e8a0562167 UID:6f217ce6-c280-4547-810c-3dc2123f6f5b NetNS:/var/run/netns/df6ff7e0-c1e9-4a61-8c65-69aad7fb04e6 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 08 12:40:26 addons-090979 crio[993]: time="2025-09-08 12:40:26.744183478Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-ghcb9 to CNI network \"kindnet\" (type=ptp)"
	Sep 08 12:40:26 addons-090979 crio[993]: time="2025-09-08 12:40:26.766101308Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-ghcb9 Namespace:default ID:a9de36b96bdd14df298f7ae227997b73cb805c53448cbbf71ac5b9e8a0562167 UID:6f217ce6-c280-4547-810c-3dc2123f6f5b NetNS:/var/run/netns/df6ff7e0-c1e9-4a61-8c65-69aad7fb04e6 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 08 12:40:26 addons-090979 crio[993]: time="2025-09-08 12:40:26.766270351Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-ghcb9 for CNI network kindnet (type=ptp)"
	Sep 08 12:40:26 addons-090979 crio[993]: time="2025-09-08 12:40:26.769709026Z" level=info msg="Ran pod sandbox a9de36b96bdd14df298f7ae227997b73cb805c53448cbbf71ac5b9e8a0562167 with infra container: default/hello-world-app-5d498dc89-ghcb9/POD" id=c3ed42f4-6fea-43e4-a3b9-335ca5361371 name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 08 12:40:26 addons-090979 crio[993]: time="2025-09-08 12:40:26.770891787Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=a3cdabc8-2ed4-45f4-b42c-070e8df0c5fe name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:40:26 addons-090979 crio[993]: time="2025-09-08 12:40:26.771117577Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=a3cdabc8-2ed4-45f4-b42c-070e8df0c5fe name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:40:26 addons-090979 crio[993]: time="2025-09-08 12:40:26.772004146Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=02772f6a-94e1-4c22-8e72-bd894e179b8b name=/runtime.v1.ImageService/PullImage
	Sep 08 12:40:26 addons-090979 crio[993]: time="2025-09-08 12:40:26.774838576Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Sep 08 12:40:27 addons-090979 crio[993]: time="2025-09-08 12:40:27.026688755Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Sep 08 12:40:27 addons-090979 crio[993]: time="2025-09-08 12:40:27.790676209Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6" id=02772f6a-94e1-4c22-8e72-bd894e179b8b name=/runtime.v1.ImageService/PullImage
	Sep 08 12:40:27 addons-090979 crio[993]: time="2025-09-08 12:40:27.791524510Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=650e4648-64f9-45c7-b95e-ca02eb7d2820 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:40:27 addons-090979 crio[993]: time="2025-09-08 12:40:27.792163267Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b],Size_:4789170,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=650e4648-64f9-45c7-b95e-ca02eb7d2820 name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:40:27 addons-090979 crio[993]: time="2025-09-08 12:40:27.793076388Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=ea32880e-d1c8-4a78-bece-5088e744f96e name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:40:27 addons-090979 crio[993]: time="2025-09-08 12:40:27.793695805Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b],Size_:4789170,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=ea32880e-d1c8-4a78-bece-5088e744f96e name=/runtime.v1.ImageService/ImageStatus
	Sep 08 12:40:27 addons-090979 crio[993]: time="2025-09-08 12:40:27.799763371Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-ghcb9/hello-world-app" id=2696c0f6-b689-4b07-9e01-38c40b464de0 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 08 12:40:27 addons-090979 crio[993]: time="2025-09-08 12:40:27.799876939Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 08 12:40:27 addons-090979 crio[993]: time="2025-09-08 12:40:27.823437495Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/976b8f3403b1560a99cd2242b777ae22c36b635aeabc4bf1a71dbf28ff98e376/merged/etc/passwd: no such file or directory"
	Sep 08 12:40:27 addons-090979 crio[993]: time="2025-09-08 12:40:27.823621151Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/976b8f3403b1560a99cd2242b777ae22c36b635aeabc4bf1a71dbf28ff98e376/merged/etc/group: no such file or directory"
	Sep 08 12:40:27 addons-090979 crio[993]: time="2025-09-08 12:40:27.913227580Z" level=info msg="Created container 2d1aa0ccffa3d2bfdbc66b0658db7959c06655012964e845b1d74fba1b390108: default/hello-world-app-5d498dc89-ghcb9/hello-world-app" id=2696c0f6-b689-4b07-9e01-38c40b464de0 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 08 12:40:27 addons-090979 crio[993]: time="2025-09-08 12:40:27.914031490Z" level=info msg="Starting container: 2d1aa0ccffa3d2bfdbc66b0658db7959c06655012964e845b1d74fba1b390108" id=b3bc3711-1fce-40c3-94a3-cb3c8ba18c14 name=/runtime.v1.RuntimeService/StartContainer
	Sep 08 12:40:27 addons-090979 crio[993]: time="2025-09-08 12:40:27.927205290Z" level=info msg="Started container" PID=9905 containerID=2d1aa0ccffa3d2bfdbc66b0658db7959c06655012964e845b1d74fba1b390108 description=default/hello-world-app-5d498dc89-ghcb9/hello-world-app id=b3bc3711-1fce-40c3-94a3-cb3c8ba18c14 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a9de36b96bdd14df298f7ae227997b73cb805c53448cbbf71ac5b9e8a0562167
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD
	2d1aa0ccffa3d       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        Less than a second ago   Running             hello-world-app           0                   a9de36b96bdd1       hello-world-app-5d498dc89-ghcb9
	0f35ab14f57cc       docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8                              2 minutes ago            Running             nginx                     0                   66946ab1f3a8d       nginx
	aa03702e4cac4       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago            Running             busybox                   0                   1fcb05b1e3360       busybox
	6b34878700e0f       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:b3f8a40cecf84afd8a5299442eab04c52f913ef6194e01dc4fbeb783f9d42c58            4 minutes ago            Running             gadget                    0                   6cbca808477a9       gadget-hfwws
	0daee4a45715d       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef             4 minutes ago            Running             controller                0                   91aa5d716a23b       ingress-nginx-controller-9cc49f96f-7pt5r
	d8e775d6bd7ab       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958               4 minutes ago            Running             minikube-ingress-dns      0                   68fe00e87f650       kube-ingress-dns-minikube
	40b51230dfa88       c67c707f59d87e1add5896e856d3ed36fbff2a778620f70d33b799e0541a77e3                                                             4 minutes ago            Exited              patch                     2                   785c3e0223f4e       ingress-nginx-admission-patch-5bwmx
	ef81bc0078da7       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24   5 minutes ago            Exited              create                    0                   a0c41847cf30b       ingress-nginx-admission-create-7phvc
	407ea665b7e8f       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             5 minutes ago            Running             storage-provisioner       0                   1f2e433a3d8ae       storage-provisioner
	7adf4eafb767e       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                             5 minutes ago            Running             coredns                   0                   84be011dec835       coredns-66bc5c9577-fxjj6
	f1e1272a8326d       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                             6 minutes ago            Running             kindnet-cni               0                   da1785c5bf2db       kindnet-j2gn4
	be67b32cdae18       6fc32d66c141152245438e6512df788cb52d64a1617e33561950b0e7a4675abf                                                             6 minutes ago            Running             kube-proxy                0                   4949c819d130c       kube-proxy-lz2kz
	3f0362957632a       d291939e994064911484215449d0ab96c535b02adc4fc5d0ad4e438cf71465be                                                             6 minutes ago            Running             kube-apiserver            0                   f5defbb3fb964       kube-apiserver-addons-090979
	dabee25f6789a       a25f5ef9c34c37c649f3b4f9631a169221ac2d6f41d9767c7588cd355f76f9ee                                                             6 minutes ago            Running             kube-scheduler            0                   197c789f50552       kube-scheduler-addons-090979
	809b7be18661b       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                             6 minutes ago            Running             etcd                      0                   3dd0dc1d0bd3a       etcd-addons-090979
	79dfaa5ec4a51       996be7e86d9b3a549d718de63713d9fea9db1f45ac44863a6770292d7b463570                                                             6 minutes ago            Running             kube-controller-manager   0                   d23dc6c565eb3       kube-controller-manager-addons-090979
	
	
	==> coredns [7adf4eafb767e0b79eaca2f19a84fcdff5ba968cf8914fd4616f18ece2649116] <==
	[INFO] 10.244.0.17:41403 - 48809 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.004295058s
	[INFO] 10.244.0.17:41403 - 5072 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.002849133s
	[INFO] 10.244.0.17:41403 - 31367 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.002867833s
	[INFO] 10.244.0.17:52358 - 36470 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000176937s
	[INFO] 10.244.0.17:52358 - 36282 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000193642s
	[INFO] 10.244.0.17:52531 - 42706 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000126499s
	[INFO] 10.244.0.17:52531 - 42278 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00012316s
	[INFO] 10.244.0.17:33783 - 16249 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000121707s
	[INFO] 10.244.0.17:33783 - 16044 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000122454s
	[INFO] 10.244.0.17:48843 - 33700 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001488977s
	[INFO] 10.244.0.17:48843 - 33528 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001483561s
	[INFO] 10.244.0.17:43369 - 13823 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000114348s
	[INFO] 10.244.0.17:43369 - 13378 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000071886s
	[INFO] 10.244.0.21:42112 - 32201 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000189171s
	[INFO] 10.244.0.21:39786 - 47813 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000152263s
	[INFO] 10.244.0.21:55243 - 21076 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000143303s
	[INFO] 10.244.0.21:33120 - 56137 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.0001089s
	[INFO] 10.244.0.21:51816 - 44232 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000212145s
	[INFO] 10.244.0.21:38061 - 23382 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000351666s
	[INFO] 10.244.0.21:37978 - 45831 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002347493s
	[INFO] 10.244.0.21:56807 - 17290 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.007284483s
	[INFO] 10.244.0.21:33019 - 22709 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001356628s
	[INFO] 10.244.0.21:38911 - 57972 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.005962243s
	[INFO] 10.244.0.24:34551 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.0001948s
	[INFO] 10.244.0.24:38146 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000182311s
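	(Editor's note: the NXDOMAIN/NOERROR pairs above are ordinary resolv.conf search-path expansion, not a fault. With the pod default of ndots:5, a name like registry.kube-system.svc.cluster.local has only four dots, so every search suffix is tried first and only the final absolute query succeeds. A minimal Go sketch of that candidate ordering, assuming the standard glibc/musl rule; the search list is inferred from the suffixes visible in the log.)

    package main

    import (
    	"fmt"
    	"strings"
    )

    // candidates lists the FQDNs a stub resolver tries for name, given a
    // resolv.conf search list and an ndots threshold (Kubernetes pods
    // default to ndots:5).
    func candidates(name string, search []string, ndots int) []string {
    	var out []string
    	if strings.Count(name, ".") < ndots {
    		// Fewer dots than ndots: search suffixes are tried first,
    		// producing the NXDOMAIN sequence seen in the coredns log.
    		for _, suffix := range search {
    			out = append(out, name+"."+suffix)
    		}
    	}
    	return append(out, name) // the literal name is tried last
    }

    func main() {
    	search := []string{ // search line of a pod in kube-system on this node
    		"kube-system.svc.cluster.local",
    		"svc.cluster.local",
    		"cluster.local",
    		"us-east-2.compute.internal",
    	}
    	for _, q := range candidates("registry.kube-system.svc.cluster.local", search, 5) {
    		fmt.Println(q)
    	}
    }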
	
	
	==> describe nodes <==
	Name:               addons-090979
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-090979
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3f6dd380c737091fd766d425b85ffa6c4f72b9ba
	                    minikube.k8s.io/name=addons-090979
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_08T12_34_22_0700
	                    minikube.k8s.io/version=v1.36.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-090979
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Sep 2025 12:34:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-090979
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Sep 2025 12:40:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Sep 2025 12:38:26 +0000   Mon, 08 Sep 2025 12:34:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Sep 2025 12:38:26 +0000   Mon, 08 Sep 2025 12:34:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Sep 2025 12:38:26 +0000   Mon, 08 Sep 2025 12:34:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Sep 2025 12:38:26 +0000   Mon, 08 Sep 2025 12:35:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-090979
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 6a435d5c9b3e4e03b064f474e07a670c
	  System UUID:                fda39e5e-d67f-4c86-a164-8a02a53e8e29
	  Boot ID:                    96333a60-ea75-4725-84ac-97579709a820
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m59s
	  default                     hello-world-app-5d498dc89-ghcb9             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  gadget                      gadget-hfwws                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-7pt5r    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         5m56s
	  kube-system                 coredns-66bc5c9577-fxjj6                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m1s
	  kube-system                 etcd-addons-090979                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m7s
	  kube-system                 kindnet-j2gn4                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m2s
	  kube-system                 kube-apiserver-addons-090979                250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m7s
	  kube-system                 kube-controller-manager-addons-090979       200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m7s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	  kube-system                 kube-proxy-lz2kz                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  kube-system                 kube-scheduler-addons-090979                100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m7s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             310Mi (3%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m56s                  kube-proxy       
	  Normal   Starting                 6m14s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m14s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  6m14s (x2 over 6m14s)  kubelet          Node addons-090979 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m14s                  kubelet          Node addons-090979 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m14s                  kubelet          Node addons-090979 status is now: NodeHasSufficientPID
	  Normal   Starting                 6m7s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m7s                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  6m7s                   kubelet          Node addons-090979 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m7s                   kubelet          Node addons-090979 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m7s                   kubelet          Node addons-090979 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m3s                   node-controller  Node addons-090979 event: Registered Node addons-090979 in Controller
	  Normal   NodeReady                5m19s                  kubelet          Node addons-090979 status is now: NodeReady
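	(Editor's note: the percentages in the Allocated resources table above are just the summed pod requests divided by the node's allocatable values, truncated to whole percent. A quick check against the numbers in this dump:)

    package main

    import "fmt"

    func main() {
    	// Summing the request columns of the pod table above:
    	// 100m+100m+100m+100m+250m+200m+100m CPU and 90Mi+70Mi+100Mi+50Mi memory.
    	cpuReqMilli, cpuAllocMilli := 950, 2000 // 2 CPUs allocatable
    	memReqKi, memAllocKi := 310*1024, 8022296

    	fmt.Printf("cpu %dm (%d%%)\n", cpuReqMilli, cpuReqMilli*100/cpuAllocMilli)  // 47%
    	fmt.Printf("memory %dMi (%d%%)\n", memReqKi/1024, memReqKi*100/memAllocKi) // 3%
    }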
	
	
	==> dmesg <==
	[Sep 8 11:17] kauditd_printk_skb: 8 callbacks suppressed
	[Sep 8 11:40] hrtimer: interrupt took 32417879 ns
	[  +5.640282] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Sep 8 12:32] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [809b7be18661b03668391df1e423b3df6ea01eaa8c45689cc9d67ecc31a994f0] <==
	{"level":"warn","ts":"2025-09-08T12:34:30.285883Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"124.740281ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" limit:1 ","response":"range_response_count:1 size:351"}
	{"level":"warn","ts":"2025-09-08T12:34:30.285949Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"124.850518ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" limit:1 ","response":"range_response_count:1 size:351"}
	{"level":"warn","ts":"2025-09-08T12:34:30.285991Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"189.070888ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiextensions.k8s.io/customresourcedefinitions\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-09-08T12:34:30.286115Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"106.386316ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/default/cloud-spanner-emulator\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-08T12:34:30.286294Z","caller":"traceutil/trace.go:172","msg":"trace[2108625752] transaction","detail":"{read_only:false; response_revision:373; number_of_response:1; }","duration":"106.513514ms","start":"2025-09-08T12:34:30.179772Z","end":"2025-09-08T12:34:30.286285Z","steps":["trace[2108625752] 'process raft request'  (duration: 106.250038ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-08T12:34:30.350268Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-08T12:34:29.914385Z","time spent":"435.82642ms","remote":"127.0.0.1:58708","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4683,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/daemonsets/kube-system/kindnet\" mod_revision:318 > success:<request_put:<key:\"/registry/daemonsets/kube-system/kindnet\" value_size:4635 >> failure:<request_range:<key:\"/registry/daemonsets/kube-system/kindnet\" > >"}
	{"level":"warn","ts":"2025-09-08T12:34:30.350385Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-09-08T12:34:29.946909Z","time spent":"403.461411ms","remote":"127.0.0.1:57952","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":714,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/kube-proxy-lz2kz.18634ec0cb150e3c\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-proxy-lz2kz.18634ec0cb150e3c\" value_size:634 lease:8128039830243210547 >> failure:<>"}
	{"level":"info","ts":"2025-09-08T12:34:30.350477Z","caller":"traceutil/trace.go:172","msg":"trace[2127057365] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:372; }","duration":"189.346459ms","start":"2025-09-08T12:34:30.161120Z","end":"2025-09-08T12:34:30.350466Z","steps":["trace[2127057365] 'agreement among raft nodes before linearized reading'  (duration: 124.611894ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-08T12:34:30.350556Z","caller":"traceutil/trace.go:172","msg":"trace[1718294761] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:372; }","duration":"189.456531ms","start":"2025-09-08T12:34:30.161094Z","end":"2025-09-08T12:34:30.350550Z","steps":["trace[1718294761] 'agreement among raft nodes before linearized reading'  (duration: 124.819575ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-08T12:34:30.350609Z","caller":"traceutil/trace.go:172","msg":"trace[306898471] range","detail":"{range_begin:/registry/apiextensions.k8s.io/customresourcedefinitions; range_end:; response_count:0; response_revision:372; }","duration":"253.687996ms","start":"2025-09-08T12:34:30.096915Z","end":"2025-09-08T12:34:30.350603Z","steps":["trace[306898471] 'agreement among raft nodes before linearized reading'  (duration: 189.048275ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-08T12:34:30.350692Z","caller":"traceutil/trace.go:172","msg":"trace[431302161] range","detail":"{range_begin:/registry/deployments/default/cloud-spanner-emulator; range_end:; response_count:0; response_revision:373; }","duration":"170.963826ms","start":"2025-09-08T12:34:30.179723Z","end":"2025-09-08T12:34:30.350687Z","steps":["trace[431302161] 'agreement among raft nodes before linearized reading'  (duration: 106.374624ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-08T12:34:30.648972Z","caller":"traceutil/trace.go:172","msg":"trace[1571292938] transaction","detail":"{read_only:false; response_revision:375; number_of_response:1; }","duration":"219.762747ms","start":"2025-09-08T12:34:30.429195Z","end":"2025-09-08T12:34:30.648958Z","steps":["trace[1571292938] 'process raft request'  (duration: 134.445709ms)","trace[1571292938] 'compare'  (duration: 85.091626ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-08T12:34:30.873354Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"145.219033ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/registry\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-09-08T12:34:30.873467Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"145.873363ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-08T12:34:30.873491Z","caller":"traceutil/trace.go:172","msg":"trace[370921461] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/storage-provisioner; range_end:; response_count:0; response_revision:381; }","duration":"145.896321ms","start":"2025-09-08T12:34:30.727585Z","end":"2025-09-08T12:34:30.873482Z","steps":["trace[370921461] 'agreement among raft nodes before linearized reading'  (duration: 53.572611ms)","trace[370921461] 'range keys from in-memory index tree'  (duration: 92.291127ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-08T12:34:30.873521Z","caller":"traceutil/trace.go:172","msg":"trace[595118467] range","detail":"{range_begin:/registry/deployments/kube-system/registry; range_end:; response_count:0; response_revision:381; }","duration":"145.31587ms","start":"2025-09-08T12:34:30.728103Z","end":"2025-09-08T12:34:30.873418Z","steps":["trace[595118467] 'agreement among raft nodes before linearized reading'  (duration: 53.039931ms)","trace[595118467] 'range keys from in-memory index tree'  (duration: 92.157867ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-08T12:34:30.874143Z","caller":"traceutil/trace.go:172","msg":"trace[1540145709] transaction","detail":"{read_only:false; response_revision:382; number_of_response:1; }","duration":"133.761365ms","start":"2025-09-08T12:34:30.740370Z","end":"2025-09-08T12:34:30.874131Z","steps":["trace[1540145709] 'process raft request'  (duration: 41.379316ms)","trace[1540145709] 'compare'  (duration: 92.084061ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-08T12:34:30.874418Z","caller":"traceutil/trace.go:172","msg":"trace[1483071667] transaction","detail":"{read_only:false; response_revision:383; number_of_response:1; }","duration":"130.196363ms","start":"2025-09-08T12:34:30.744215Z","end":"2025-09-08T12:34:30.874411Z","steps":["trace[1483071667] 'process raft request'  (duration: 129.743224ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-08T12:34:30.874688Z","caller":"traceutil/trace.go:172","msg":"trace[1164394249] transaction","detail":"{read_only:false; response_revision:384; number_of_response:1; }","duration":"107.75819ms","start":"2025-09-08T12:34:30.766923Z","end":"2025-09-08T12:34:30.874681Z","steps":["trace[1164394249] 'process raft request'  (duration: 107.690645ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-08T12:34:33.798485Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:34:33.818528Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:34:55.967791Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:34:55.999319Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:34:56.054809Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-08T12:34:56.076096Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32884","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 12:40:28 up  2:23,  0 users,  load average: 0.21, 1.33, 2.50
	Linux addons-090979 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [f1e1272a8326d495d072bcf4f9eeb71b5e5bdb8859d19414a541fdcd19ba2ac9] <==
	I0908 12:38:19.235909       1 main.go:301] handling current node
	I0908 12:38:29.236425       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 12:38:29.236546       1 main.go:301] handling current node
	I0908 12:38:39.242316       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 12:38:39.242441       1 main.go:301] handling current node
	I0908 12:38:49.241910       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 12:38:49.242041       1 main.go:301] handling current node
	I0908 12:38:59.237884       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 12:38:59.237921       1 main.go:301] handling current node
	I0908 12:39:09.240031       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 12:39:09.240154       1 main.go:301] handling current node
	I0908 12:39:19.242063       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 12:39:19.242101       1 main.go:301] handling current node
	I0908 12:39:29.236462       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 12:39:29.236554       1 main.go:301] handling current node
	I0908 12:39:39.239537       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 12:39:39.239656       1 main.go:301] handling current node
	I0908 12:39:49.242046       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 12:39:49.242080       1 main.go:301] handling current node
	I0908 12:39:59.239538       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 12:39:59.239674       1 main.go:301] handling current node
	I0908 12:40:09.237635       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 12:40:09.237699       1 main.go:301] handling current node
	I0908 12:40:19.243340       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0908 12:40:19.243508       1 main.go:301] handling current node
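	(Editor's note: the even ~10s spacing of these pairs suggests a ticker-driven reconcile loop that re-reads the node list each pass. A sketch under that assumption; the real kindnet code may be structured differently:)

    package main

    import (
    	"log"
    	"time"
    )

    // reconcile stands in for kindnet's per-pass node handling; with a
    // single node there is nothing to route, so it only logs, as above.
    func reconcile(nodeIPs map[string]struct{}) {
    	log.Printf("Handling node with IPs: %v", nodeIPs)
    	log.Printf("handling current node")
    }

    func main() {
    	ticker := time.NewTicker(10 * time.Second)
    	defer ticker.Stop()
    	for range ticker.C {
    		reconcile(map[string]struct{}{"192.168.49.2": {}})
    	}
    }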
	
	
	==> kube-apiserver [3f0362957632a77793f856d649efa0c261c79b424d19e66532000f5c3ae83357] <==
	I0908 12:36:50.365037       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.97.101.184"}
	I0908 12:37:23.086930       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0908 12:37:39.114255       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0908 12:37:47.930568       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0908 12:38:05.117405       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0908 12:38:05.117541       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0908 12:38:05.159031       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0908 12:38:05.159102       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0908 12:38:05.171085       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0908 12:38:05.171301       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0908 12:38:05.185346       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0908 12:38:05.185501       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0908 12:38:05.226645       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0908 12:38:05.227416       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0908 12:38:06.172205       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0908 12:38:06.227419       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0908 12:38:06.349895       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0908 12:38:06.624594       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I0908 12:38:07.091968       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.123.40"}
	I0908 12:38:07.318182       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 12:38:27.906481       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0908 12:38:34.344746       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 12:39:28.821740       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 12:39:50.762682       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0908 12:40:26.578557       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.96.107.10"}
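	(Editor's note: each "allocated clusterIPs" line is the Service IP allocator handing out an address from the cluster's service CIDR. All three IPs logged here fit the default 10.96.0.0/12 range — an assumption consistent with, but not stated in, this log — which is easy to verify:)

    package main

    import (
    	"fmt"
    	"net/netip"
    )

    func main() {
    	// Default kubeadm/minikube service CIDR (assumed, not shown in the log).
    	cidr := netip.MustParsePrefix("10.96.0.0/12")
    	for _, s := range []string{"10.97.101.184", "10.105.123.40", "10.96.107.10"} {
    		fmt.Printf("%s in %s: %v\n", s, cidr, cidr.Contains(netip.MustParseAddr(s)))
    	}
    }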
	
	
	==> kube-controller-manager [79dfaa5ec4a518857628572119cf8a328796e1d1098510172d2f6a3b4c549a84] <==
	E0908 12:38:25.074729       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 12:38:25.864980       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 12:38:25.866115       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	I0908 12:38:26.157864       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I0908 12:38:26.157920       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0908 12:38:26.190026       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I0908 12:38:26.190091       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	E0908 12:38:38.303411       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 12:38:38.304483       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 12:38:44.747631       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 12:38:44.748627       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 12:38:47.076807       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 12:38:47.077932       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 12:39:20.002519       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 12:39:20.004052       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 12:39:28.319694       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 12:39:28.320717       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 12:39:32.852599       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 12:39:32.853748       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 12:39:54.086938       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 12:39:54.087963       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 12:40:08.438533       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 12:40:08.439667       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0908 12:40:15.327877       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0908 12:40:15.329082       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
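	(Editor's note: these repeating pairs are client-go reflectors probing the snapshot.storage.k8s.io types after the CSI addon's CRDs were deleted at 12:38:05: the streaming watch-list attempt fails with "not found", the reflector falls back to plain LIST/WATCH as the message says, and here even the fallback fails because the resource is gone. A stripped-down sketch of that fallback shape, not the actual reflector code:)

    package main

    import (
    	"errors"
    	"fmt"
    )

    var errNotFound = errors.New("the server could not find the requested resource")

    // watchList stands in for the streaming watch-list request, which keeps
    // failing here because the CRDs behind these types were removed mid-test.
    func watchList() error { return errNotFound }

    // listWatch stands in for the classic LIST followed by WATCH; it fails
    // for the same reason, so the reflector just retries later.
    func listWatch() error { return errNotFound }

    func main() {
    	if err := watchList(); err != nil {
    		// "falling back to the standard LIST/WATCH semantics because
    		// making progress is better than deadlocking"
    		fmt.Println("watchlist failed, falling back:", err)
    		if err := listWatch(); err != nil {
    			fmt.Println("Failed to watch:", err)
    		}
    	}
    }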
	
	
	==> kube-proxy [be67b32cdae18085608552a7528839dc431534516c870b978314bab89df906bc] <==
	I0908 12:34:31.443713       1 server_linux.go:53] "Using iptables proxy"
	I0908 12:34:31.747946       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0908 12:34:32.057475       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0908 12:34:32.085914       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0908 12:34:32.174479       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0908 12:34:32.198236       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0908 12:34:32.198360       1 server_linux.go:132] "Using iptables Proxier"
	I0908 12:34:32.207063       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0908 12:34:32.207509       1 server.go:527] "Version info" version="v1.34.0"
	I0908 12:34:32.207749       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0908 12:34:32.209129       1 config.go:200] "Starting service config controller"
	I0908 12:34:32.209206       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0908 12:34:32.209253       1 config.go:106] "Starting endpoint slice config controller"
	I0908 12:34:32.209301       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0908 12:34:32.209342       1 config.go:403] "Starting serviceCIDR config controller"
	I0908 12:34:32.209383       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0908 12:34:32.246654       1 config.go:309] "Starting node config controller"
	I0908 12:34:32.246749       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0908 12:34:32.246781       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0908 12:34:32.326684       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0908 12:34:32.337862       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0908 12:34:32.340541       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
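	(Editor's note: the interleaved "Waiting for caches to sync" / "Caches are synced" lines are the standard client-go shared-informer startup handshake. A minimal sketch of that pattern with client-go; the kubeconfig path is a hypothetical stand-in, since kube-proxy wires its own config plumbing:)

    package main

    import (
    	"fmt"
    	"time"

    	"k8s.io/client-go/informers"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/cache"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Hypothetical kubeconfig path, for illustration only.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	factory := informers.NewSharedInformerFactory(client, 30*time.Second)
    	svc := factory.Core().V1().Services().Informer()

    	stop := make(chan struct{})
    	defer close(stop)
    	factory.Start(stop)

    	// This blocking call is what the paired log lines correspond to.
    	if !cache.WaitForCacheSync(stop, svc.HasSynced) {
    		panic("caches never synced")
    	}
    	fmt.Println("Caches are synced")
    }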
	
	
	==> kube-scheduler [dabee25f6789a9238de5dda7fc1ab78ec823ed0a454c912e75d91909b02466b2] <==
	E0908 12:34:19.034701       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0908 12:34:19.034750       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0908 12:34:19.034796       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0908 12:34:19.034843       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0908 12:34:19.034897       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0908 12:34:19.034948       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0908 12:34:19.035014       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0908 12:34:19.035855       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0908 12:34:19.042167       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0908 12:34:19.042336       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0908 12:34:19.901947       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0908 12:34:19.953233       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0908 12:34:19.975749       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0908 12:34:19.996268       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E0908 12:34:20.081136       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0908 12:34:20.102143       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0908 12:34:20.104700       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0908 12:34:20.105960       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0908 12:34:20.116456       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0908 12:34:20.127960       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0908 12:34:20.159149       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0908 12:34:20.175590       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0908 12:34:20.224792       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0908 12:34:20.238939       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	I0908 12:34:22.182426       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 08 12:39:51 addons-090979 kubelet[1516]: E0908 12:39:51.728865    1516 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757335191728590286 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:597483} inodes_used:{value:225}}"
	Sep 08 12:39:51 addons-090979 kubelet[1516]: E0908 12:39:51.728913    1516 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757335191728590286 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:597483} inodes_used:{value:225}}"
	Sep 08 12:40:01 addons-090979 kubelet[1516]: E0908 12:40:01.731669    1516 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757335201731229999 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:597483} inodes_used:{value:225}}"
	Sep 08 12:40:01 addons-090979 kubelet[1516]: E0908 12:40:01.731709    1516 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757335201731229999 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:597483} inodes_used:{value:225}}"
	Sep 08 12:40:10 addons-090979 kubelet[1516]: I0908 12:40:10.498715    1516 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Sep 08 12:40:10 addons-090979 kubelet[1516]: E0908 12:40:10.631843    1516 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/a783b93e939e770ed96e0dfd08573aa89a655eb49ef6f8031af7365cb4edf770/diff" to get inode usage: stat /var/lib/containers/storage/overlay/a783b93e939e770ed96e0dfd08573aa89a655eb49ef6f8031af7365cb4edf770/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 12:40:10 addons-090979 kubelet[1516]: E0908 12:40:10.939663    1516 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/e63b817a25c819eb18ef3a7f88486bc098da86ddbc231a406b4def297745dbaa/diff" to get inode usage: stat /var/lib/containers/storage/overlay/e63b817a25c819eb18ef3a7f88486bc098da86ddbc231a406b4def297745dbaa/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 12:40:10 addons-090979 kubelet[1516]: E0908 12:40:10.974636    1516 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/afd57ecb2ae3a3dde9e5587572d8bf895d7f8553ddad57e21574a2fab09f0796/diff" to get inode usage: stat /var/lib/containers/storage/overlay/afd57ecb2ae3a3dde9e5587572d8bf895d7f8553ddad57e21574a2fab09f0796/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 12:40:10 addons-090979 kubelet[1516]: E0908 12:40:10.991626    1516 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/c838958bd8177467b6ca29650e609fd043adc2b53448d1c9554af12617af1d8d/diff" to get inode usage: stat /var/lib/containers/storage/overlay/c838958bd8177467b6ca29650e609fd043adc2b53448d1c9554af12617af1d8d/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 12:40:11 addons-090979 kubelet[1516]: E0908 12:40:11.734999    1516 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757335211734726676 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:597483} inodes_used:{value:225}}"
	Sep 08 12:40:11 addons-090979 kubelet[1516]: E0908 12:40:11.735041    1516 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757335211734726676 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:597483} inodes_used:{value:225}}"
	Sep 08 12:40:14 addons-090979 kubelet[1516]: E0908 12:40:14.800027    1516 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/e5d625a3cb84ad8451ab93a231f78200f2e7ca362146c86ffa8adb0d00eef05a/diff" to get inode usage: stat /var/lib/containers/storage/overlay/e5d625a3cb84ad8451ab93a231f78200f2e7ca362146c86ffa8adb0d00eef05a/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 12:40:21 addons-090979 kubelet[1516]: E0908 12:40:21.602592    1516 manager.go:1116] Failed to create existing container: /crio-5bcbc826f31646e7dfe02fdcba39f18ab98743293ad70c1f294d3dde18d1d9d1: Error finding container 5bcbc826f31646e7dfe02fdcba39f18ab98743293ad70c1f294d3dde18d1d9d1: Status 404 returned error can't find the container with id 5bcbc826f31646e7dfe02fdcba39f18ab98743293ad70c1f294d3dde18d1d9d1
	Sep 08 12:40:21 addons-090979 kubelet[1516]: E0908 12:40:21.620668    1516 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/e5d625a3cb84ad8451ab93a231f78200f2e7ca362146c86ffa8adb0d00eef05a/diff" to get inode usage: stat /var/lib/containers/storage/overlay/e5d625a3cb84ad8451ab93a231f78200f2e7ca362146c86ffa8adb0d00eef05a/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 12:40:21 addons-090979 kubelet[1516]: E0908 12:40:21.627126    1516 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/fb2544f297c1c226d091487fed25ffd6692257abf205a393d400041e718a783d/diff" to get inode usage: stat /var/lib/containers/storage/overlay/fb2544f297c1c226d091487fed25ffd6692257abf205a393d400041e718a783d/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 12:40:21 addons-090979 kubelet[1516]: E0908 12:40:21.627301    1516 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/c838958bd8177467b6ca29650e609fd043adc2b53448d1c9554af12617af1d8d/diff" to get inode usage: stat /var/lib/containers/storage/overlay/c838958bd8177467b6ca29650e609fd043adc2b53448d1c9554af12617af1d8d/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 12:40:21 addons-090979 kubelet[1516]: E0908 12:40:21.627324    1516 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/94322f6b9cf433e546bb1d27aca26828ea58280525ec4f8f6f814a0b819ae815/diff" to get inode usage: stat /var/lib/containers/storage/overlay/94322f6b9cf433e546bb1d27aca26828ea58280525ec4f8f6f814a0b819ae815/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 12:40:21 addons-090979 kubelet[1516]: E0908 12:40:21.637587    1516 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/fb2544f297c1c226d091487fed25ffd6692257abf205a393d400041e718a783d/diff" to get inode usage: stat /var/lib/containers/storage/overlay/fb2544f297c1c226d091487fed25ffd6692257abf205a393d400041e718a783d/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 12:40:21 addons-090979 kubelet[1516]: E0908 12:40:21.651664    1516 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/94322f6b9cf433e546bb1d27aca26828ea58280525ec4f8f6f814a0b819ae815/diff" to get inode usage: stat /var/lib/containers/storage/overlay/94322f6b9cf433e546bb1d27aca26828ea58280525ec4f8f6f814a0b819ae815/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 12:40:21 addons-090979 kubelet[1516]: E0908 12:40:21.680289    1516 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/e63b817a25c819eb18ef3a7f88486bc098da86ddbc231a406b4def297745dbaa/diff" to get inode usage: stat /var/lib/containers/storage/overlay/e63b817a25c819eb18ef3a7f88486bc098da86ddbc231a406b4def297745dbaa/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 12:40:21 addons-090979 kubelet[1516]: E0908 12:40:21.690503    1516 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/151ec5b67c631b6921e54b8ec35342ccfeda714d52435ecf36f8a95e83d3fd7c/diff" to get inode usage: stat /var/lib/containers/storage/overlay/151ec5b67c631b6921e54b8ec35342ccfeda714d52435ecf36f8a95e83d3fd7c/diff: no such file or directory, extraDiskErr: <nil>
	Sep 08 12:40:21 addons-090979 kubelet[1516]: E0908 12:40:21.738210    1516 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1757335221737978160 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:597483} inodes_used:{value:225}}"
	Sep 08 12:40:21 addons-090979 kubelet[1516]: E0908 12:40:21.738242    1516 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1757335221737978160 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:597483} inodes_used:{value:225}}"
	Sep 08 12:40:26 addons-090979 kubelet[1516]: I0908 12:40:26.388922    1516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8mlg\" (UniqueName: \"kubernetes.io/projected/6f217ce6-c280-4547-810c-3dc2123f6f5b-kube-api-access-w8mlg\") pod \"hello-world-app-5d498dc89-ghcb9\" (UID: \"6f217ce6-c280-4547-810c-3dc2123f6f5b\") " pod="default/hello-world-app-5d498dc89-ghcb9"
	Sep 08 12:40:26 addons-090979 kubelet[1516]: W0908 12:40:26.768648    1516 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/c0b2b1561bc05f50862f30fcec130c66e1f3e485c8f875e8b8cb86d54c1b8f12/crio-a9de36b96bdd14df298f7ae227997b73cb805c53448cbbf71ac5b9e8a0562167 WatchSource:0}: Error finding container a9de36b96bdd14df298f7ae227997b73cb805c53448cbbf71ac5b9e8a0562167: Status 404 returned error can't find the container with id a9de36b96bdd14df298f7ae227997b73cb805c53448cbbf71ac5b9e8a0562167
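	(Editor's note: the fsHandler errors are cAdvisor stat-ing the overlay "diff" directories of containers that had just been deleted during the addon teardown: a benign race, since the layer is simply gone by the time stats are collected. The usual way to treat that outcome, with a hypothetical layer path for illustration:)

    package main

    import (
    	"errors"
    	"fmt"
    	"io/fs"
    	"os"
    )

    func main() {
    	// Hypothetical layer path; the real ones are the 64-hex dirs above.
    	diff := "/var/lib/containers/storage/overlay/deadbeef/diff"
    	if _, err := os.Stat(diff); errors.Is(err, fs.ErrNotExist) {
    		fmt.Println("layer already removed; skipping its filesystem stats")
    	}
    }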
	
	
	==> storage-provisioner [407ea665b7e8f145767d69bdfb3c6dfe3cdf59f7ffa57cbd4a6b1de3d011c021] <==
	W0908 12:40:03.386906       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:40:05.389582       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:40:05.396585       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:40:07.399793       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:40:07.405265       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:40:09.408784       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:40:09.413499       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:40:11.417347       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:40:11.424515       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:40:13.428434       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:40:13.433845       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:40:15.437300       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:40:15.442369       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:40:17.445295       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:40:17.449900       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:40:19.453124       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:40:19.460583       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:40:21.467461       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:40:21.471947       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:40:23.475404       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:40:23.482126       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:40:25.485404       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:40:25.489818       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:40:27.493912       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0908 12:40:27.503284       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
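
The storage-provisioner warnings in the dump above repeat every two seconds, which matches leader-election renewals; they most likely come from client-go still using v1 Endpoints as its election lock, so they are deprecation noise rather than a failure. For reference, a minimal sketch of the discovery.k8s.io/v1 EndpointSlice shape the message points to (every name and address here is illustrative):

	apiVersion: discovery.k8s.io/v1
	kind: EndpointSlice
	metadata:
	  name: example-svc-abc12                    # conventionally <service-name>-<suffix>
	  labels:
	    kubernetes.io/service-name: example-svc  # ties the slice to its Service
	addressType: IPv4
	ports:
	- name: http
	  port: 8080
	  protocol: TCP
	endpoints:
	- addresses:
	  - "10.244.0.6"
	  conditions:
	    ready: true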
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-090979 -n addons-090979
helpers_test.go:269: (dbg) Run:  kubectl --context addons-090979 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-7phvc ingress-nginx-admission-patch-5bwmx
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-090979 describe pod ingress-nginx-admission-create-7phvc ingress-nginx-admission-patch-5bwmx
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-090979 describe pod ingress-nginx-admission-create-7phvc ingress-nginx-admission-patch-5bwmx: exit status 1 (90.210133ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-7phvc" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-5bwmx" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-090979 describe pod ingress-nginx-admission-create-7phvc ingress-nginx-admission-patch-5bwmx: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-090979 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-090979 addons disable ingress-dns --alsologtostderr -v=1: (1.727679612s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-090979 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-090979 addons disable ingress --alsologtostderr -v=1: (7.836689718s)
--- FAIL: TestAddons/parallel/Ingress (153.31s)
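
A note for reading this failure: "ssh: Process exited with status 28" is curl's exit code 28, "operation timed out", surfacing through minikube ssh, so the ingress check hung for the full wait rather than receiving an HTTP error. The probe can be replayed by hand with an explicit cap on curl's total time (a sketch using this report's profile; -m 30 is an illustrative timeout):

	out/minikube-linux-arm64 -p addons-090979 ssh "curl -s -m 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"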

TestFunctional/parallel/ServiceCmdConnect (604.03s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-491794 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-491794 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-595vq" [21f59fc5-8b1f-4da0-b38c-5b44a8d855c5] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-491794 -n functional-491794
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-09-08 12:54:44.548115658 +0000 UTC m=+1285.020531841
functional_test.go:1645: (dbg) Run:  kubectl --context functional-491794 describe po hello-node-connect-7d85dfc575-595vq -n default
functional_test.go:1645: (dbg) kubectl --context functional-491794 describe po hello-node-connect-7d85dfc575-595vq -n default:
Name:             hello-node-connect-7d85dfc575-595vq
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-491794/192.168.49.2
Start Time:       Mon, 08 Sep 2025 12:44:44 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
  IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-h4wq6 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-h4wq6:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-595vq to functional-491794
  Normal   Pulling    6m58s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     6m58s (x5 over 9m57s)   kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
  Warning  Failed     6m58s (x5 over 9m57s)   kubelet            Error: ErrImagePull
  Normal   BackOff    4m47s (x21 over 9m56s)  kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     4m47s (x21 over 9m56s)  kubelet            Error: ImagePullBackOff
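
The Events above contain the actual root cause of this test failure: the deployment referenced the short image name kicbase/echo-server, and cri-o refuses to expand short names when the node's /etc/containers/registries.conf declares no unqualified-search registries. Two hedged sketches of the usual remedies (illustrative; assumes the image lives on Docker Hub under that name):

	# Option 1: make the reference fully qualified so no short-name resolution is needed
	kubectl --context functional-491794 set image deployment/hello-node-connect \
	    echo-server=docker.io/kicbase/echo-server:latest

	# Option 2: permit short-name expansion on the node (registries.conf is TOML)
	# /etc/containers/registries.conf
	unqualified-search-registries = ["docker.io"]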
functional_test.go:1645: (dbg) Run:  kubectl --context functional-491794 logs hello-node-connect-7d85dfc575-595vq -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-491794 logs hello-node-connect-7d85dfc575-595vq -n default: exit status 1 (96.786632ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-595vq" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1645: kubectl --context functional-491794 logs hello-node-connect-7d85dfc575-595vq -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-491794 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-595vq
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-491794/192.168.49.2
Start Time:       Mon, 08 Sep 2025 12:44:44 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
  IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-h4wq6 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-h4wq6:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-595vq to functional-491794
  Normal   Pulling    6m58s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     6m58s (x5 over 9m57s)   kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
  Warning  Failed     6m58s (x5 over 9m57s)   kubelet            Error: ErrImagePull
  Normal   BackOff    4m47s (x21 over 9m56s)  kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     4m47s (x21 over 9m56s)  kubelet            Error: ImagePullBackOff

functional_test.go:1618: (dbg) Run:  kubectl --context functional-491794 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-491794 logs -l app=hello-node-connect: exit status 1 (83.987216ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-595vq" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1620: "kubectl --context functional-491794 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-491794 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.111.180.91
IPs:                      10.111.180.91
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31501/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
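
The empty Endpoints: field in the Service describe above is the same failure viewed from the Service side: the only backing pod never becomes Ready while stuck in ImagePullBackOff, so nothing is published behind NodePort 31501. A quick cross-check (a sketch; EndpointSlices carry the owning Service's name as a label):

	kubectl --context functional-491794 get endpointslices \
	    -l kubernetes.io/service-name=hello-node-connect -o wide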
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-491794
helpers_test.go:243: (dbg) docker inspect functional-491794:

-- stdout --
	[
	    {
	        "Id": "5be19b3789eac5d587752410c90066e1267adde220db9df886e414ca3e03d2e0",
	        "Created": "2025-09-08T12:41:50.178492422Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 579169,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-08T12:41:50.235330822Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1a6e5b410fd9226cf2434621073598c7c01bccc994a53260ab0a0d834a0f1815",
	        "ResolvConfPath": "/var/lib/docker/containers/5be19b3789eac5d587752410c90066e1267adde220db9df886e414ca3e03d2e0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5be19b3789eac5d587752410c90066e1267adde220db9df886e414ca3e03d2e0/hostname",
	        "HostsPath": "/var/lib/docker/containers/5be19b3789eac5d587752410c90066e1267adde220db9df886e414ca3e03d2e0/hosts",
	        "LogPath": "/var/lib/docker/containers/5be19b3789eac5d587752410c90066e1267adde220db9df886e414ca3e03d2e0/5be19b3789eac5d587752410c90066e1267adde220db9df886e414ca3e03d2e0-json.log",
	        "Name": "/functional-491794",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-491794:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-491794",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5be19b3789eac5d587752410c90066e1267adde220db9df886e414ca3e03d2e0",
	                "LowerDir": "/var/lib/docker/overlay2/d0dd04ac2117fb6844cfb4c8913a6d61a2392a6fc637e80ad288016c36dc4c47-init/diff:/var/lib/docker/overlay2/194ba2667b0da80d09d69a06dabfcbc80057d4e7ee5de99b71c65d9470b74398/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d0dd04ac2117fb6844cfb4c8913a6d61a2392a6fc637e80ad288016c36dc4c47/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d0dd04ac2117fb6844cfb4c8913a6d61a2392a6fc637e80ad288016c36dc4c47/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d0dd04ac2117fb6844cfb4c8913a6d61a2392a6fc637e80ad288016c36dc4c47/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-491794",
	                "Source": "/var/lib/docker/volumes/functional-491794/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-491794",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-491794",
	                "name.minikube.sigs.k8s.io": "functional-491794",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8560bbf38d161b6a84f29a45d35df52dff150848e3611b6cd96f1624bffb5b92",
	            "SandboxKey": "/var/run/docker/netns/8560bbf38d16",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33514"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33515"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33518"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33516"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33517"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-491794": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6e:ad:8b:29:c7:d4",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b5bb4dd711452d4fcd565c17153ff673aa44c049262a5d384cb6514828fb89f5",
	                    "EndpointID": "6600fbc0738cad0c8ad93860d301ea58347d9faa50b5d3b402e208ceb3332c7b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-491794",
	                        "5be19b3789ea"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
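
One detail worth extracting from the inspect dump: the kic container publishes each guest port (22, 2376, 5000, 8441, 32443) on an ephemeral host port bound to 127.0.0.1, which is why the provisioning log below dials 127.0.0.1:33514 for SSH. The same mapping can be read back without parsing JSON (sketch):

	docker port functional-491794 8441/tcp
	# per the inspect above this should print: 127.0.0.1:33517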
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-491794 -n functional-491794
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-491794 logs -n 25: (1.7490876s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-491794 image load --daemon kicbase/echo-server:functional-491794 --alsologtostderr                                                             │ functional-491794 │ jenkins │ v1.36.0 │ 08 Sep 25 12:44 UTC │ 08 Sep 25 12:44 UTC │
	│ ssh     │ functional-491794 ssh sudo cat /usr/share/ca-certificates/560849.pem                                                                                      │ functional-491794 │ jenkins │ v1.36.0 │ 08 Sep 25 12:44 UTC │ 08 Sep 25 12:44 UTC │
	│ ssh     │ functional-491794 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                  │ functional-491794 │ jenkins │ v1.36.0 │ 08 Sep 25 12:44 UTC │ 08 Sep 25 12:44 UTC │
	│ ssh     │ functional-491794 ssh sudo cat /etc/ssl/certs/5608492.pem                                                                                                 │ functional-491794 │ jenkins │ v1.36.0 │ 08 Sep 25 12:44 UTC │ 08 Sep 25 12:44 UTC │
	│ ssh     │ functional-491794 ssh sudo cat /usr/share/ca-certificates/5608492.pem                                                                                     │ functional-491794 │ jenkins │ v1.36.0 │ 08 Sep 25 12:44 UTC │ 08 Sep 25 12:44 UTC │
	│ image   │ functional-491794 image ls                                                                                                                                │ functional-491794 │ jenkins │ v1.36.0 │ 08 Sep 25 12:44 UTC │ 08 Sep 25 12:44 UTC │
	│ image   │ functional-491794 image load --daemon kicbase/echo-server:functional-491794 --alsologtostderr                                                             │ functional-491794 │ jenkins │ v1.36.0 │ 08 Sep 25 12:44 UTC │ 08 Sep 25 12:44 UTC │
	│ ssh     │ functional-491794 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                  │ functional-491794 │ jenkins │ v1.36.0 │ 08 Sep 25 12:44 UTC │ 08 Sep 25 12:44 UTC │
	│ ssh     │ functional-491794 ssh sudo cat /etc/test/nested/copy/560849/hosts                                                                                         │ functional-491794 │ jenkins │ v1.36.0 │ 08 Sep 25 12:44 UTC │ 08 Sep 25 12:44 UTC │
	│ image   │ functional-491794 image ls                                                                                                                                │ functional-491794 │ jenkins │ v1.36.0 │ 08 Sep 25 12:44 UTC │ 08 Sep 25 12:44 UTC │
	│ image   │ functional-491794 image load --daemon kicbase/echo-server:functional-491794 --alsologtostderr                                                             │ functional-491794 │ jenkins │ v1.36.0 │ 08 Sep 25 12:44 UTC │ 08 Sep 25 12:44 UTC │
	│ image   │ functional-491794 image ls                                                                                                                                │ functional-491794 │ jenkins │ v1.36.0 │ 08 Sep 25 12:44 UTC │ 08 Sep 25 12:44 UTC │
	│ ssh     │ functional-491794 ssh echo hello                                                                                                                          │ functional-491794 │ jenkins │ v1.36.0 │ 08 Sep 25 12:44 UTC │ 08 Sep 25 12:44 UTC │
	│ image   │ functional-491794 image save kicbase/echo-server:functional-491794 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr │ functional-491794 │ jenkins │ v1.36.0 │ 08 Sep 25 12:44 UTC │ 08 Sep 25 12:44 UTC │
	│ ssh     │ functional-491794 ssh cat /etc/hostname                                                                                                                   │ functional-491794 │ jenkins │ v1.36.0 │ 08 Sep 25 12:44 UTC │ 08 Sep 25 12:44 UTC │
	│ tunnel  │ functional-491794 tunnel --alsologtostderr                                                                                                                │ functional-491794 │ jenkins │ v1.36.0 │ 08 Sep 25 12:44 UTC │                     │
	│ tunnel  │ functional-491794 tunnel --alsologtostderr                                                                                                                │ functional-491794 │ jenkins │ v1.36.0 │ 08 Sep 25 12:44 UTC │                     │
	│ image   │ functional-491794 image rm kicbase/echo-server:functional-491794 --alsologtostderr                                                                        │ functional-491794 │ jenkins │ v1.36.0 │ 08 Sep 25 12:44 UTC │ 08 Sep 25 12:44 UTC │
	│ image   │ functional-491794 image ls                                                                                                                                │ functional-491794 │ jenkins │ v1.36.0 │ 08 Sep 25 12:44 UTC │ 08 Sep 25 12:44 UTC │
	│ tunnel  │ functional-491794 tunnel --alsologtostderr                                                                                                                │ functional-491794 │ jenkins │ v1.36.0 │ 08 Sep 25 12:44 UTC │                     │
	│ image   │ functional-491794 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-491794 │ jenkins │ v1.36.0 │ 08 Sep 25 12:44 UTC │ 08 Sep 25 12:44 UTC │
	│ image   │ functional-491794 image ls                                                                                                                                │ functional-491794 │ jenkins │ v1.36.0 │ 08 Sep 25 12:44 UTC │ 08 Sep 25 12:44 UTC │
	│ image   │ functional-491794 image save --daemon kicbase/echo-server:functional-491794 --alsologtostderr                                                             │ functional-491794 │ jenkins │ v1.36.0 │ 08 Sep 25 12:44 UTC │ 08 Sep 25 12:44 UTC │
	│ addons  │ functional-491794 addons list                                                                                                                             │ functional-491794 │ jenkins │ v1.36.0 │ 08 Sep 25 12:44 UTC │ 08 Sep 25 12:44 UTC │
	│ addons  │ functional-491794 addons list -o json                                                                                                                     │ functional-491794 │ jenkins │ v1.36.0 │ 08 Sep 25 12:44 UTC │ 08 Sep 25 12:44 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 12:43:41
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
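
Decoding the first entry below against the stated format: in "I0908 12:43:41.719650  583922 out.go:360]", I is the severity (Info), 0908 the month and day, 12:43:41.719650 the wall-clock time with microseconds, 583922 the threadid field (here the minikube process id), and out.go:360 the source file and line that emitted the message.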
	I0908 12:43:41.719650  583922 out.go:360] Setting OutFile to fd 1 ...
	I0908 12:43:41.719757  583922 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:43:41.719761  583922 out.go:374] Setting ErrFile to fd 2...
	I0908 12:43:41.719765  583922 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:43:41.720036  583922 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-558996/.minikube/bin
	I0908 12:43:41.720401  583922 out.go:368] Setting JSON to false
	I0908 12:43:41.721325  583922 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":8774,"bootTime":1757326648,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I0908 12:43:41.721423  583922 start.go:140] virtualization:  
	I0908 12:43:41.726870  583922 out.go:179] * [functional-491794] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	I0908 12:43:41.729859  583922 notify.go:220] Checking for updates...
	I0908 12:43:41.732799  583922 out.go:179]   - MINIKUBE_LOCATION=21508
	I0908 12:43:41.735716  583922 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 12:43:41.738701  583922 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21508-558996/kubeconfig
	I0908 12:43:41.741707  583922 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-558996/.minikube
	I0908 12:43:41.744598  583922 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0908 12:43:41.747528  583922 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 12:43:41.750999  583922 config.go:182] Loaded profile config "functional-491794": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 12:43:41.751099  583922 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 12:43:41.773856  583922 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 12:43:41.773957  583922 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 12:43:41.838855  583922 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-09-08 12:43:41.8283112 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 12:43:41.838949  583922 docker.go:318] overlay module found
	I0908 12:43:41.842010  583922 out.go:179] * Using the docker driver based on existing profile
	I0908 12:43:41.844978  583922 start.go:304] selected driver: docker
	I0908 12:43:41.844986  583922 start.go:918] validating driver "docker" against &{Name:functional-491794 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-491794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 12:43:41.845092  583922 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 12:43:41.845206  583922 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 12:43:41.900712  583922 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-09-08 12:43:41.891595985 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 12:43:41.901161  583922 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0908 12:43:41.901179  583922 cni.go:84] Creating CNI manager for ""
	I0908 12:43:41.901233  583922 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0908 12:43:41.901281  583922 start.go:348] cluster config:
	{Name:functional-491794 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-491794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 12:43:41.906213  583922 out.go:179] * Starting "functional-491794" primary control-plane node in "functional-491794" cluster
	I0908 12:43:41.908963  583922 cache.go:123] Beginning downloading kic base image for docker with crio
	I0908 12:43:41.911812  583922 out.go:179] * Pulling base image v0.0.47-1756980985-21488 ...
	I0908 12:43:41.914608  583922 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 12:43:41.914658  583922 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21508-558996/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4
	I0908 12:43:41.914666  583922 cache.go:58] Caching tarball of preloaded images
	I0908 12:43:41.914686  583922 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon
	I0908 12:43:41.914790  583922 preload.go:172] Found /home/jenkins/minikube-integration/21508-558996/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0908 12:43:41.914799  583922 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0908 12:43:41.914956  583922 profile.go:143] Saving config to /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/functional-491794/config.json ...
	I0908 12:43:41.934222  583922 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon, skipping pull
	I0908 12:43:41.934234  583922 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 exists in daemon, skipping load
	I0908 12:43:41.934253  583922 cache.go:232] Successfully downloaded all kic artifacts
	I0908 12:43:41.934275  583922 start.go:360] acquireMachinesLock for functional-491794: {Name:mk553e2454b2ab84bfe2aff0b51f1c8cadf08235 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0908 12:43:41.934338  583922 start.go:364] duration metric: took 47.648µs to acquireMachinesLock for "functional-491794"
	I0908 12:43:41.934357  583922 start.go:96] Skipping create...Using existing machine configuration
	I0908 12:43:41.934362  583922 fix.go:54] fixHost starting: 
	I0908 12:43:41.934639  583922 cli_runner.go:164] Run: docker container inspect functional-491794 --format={{.State.Status}}
	I0908 12:43:41.951357  583922 fix.go:112] recreateIfNeeded on functional-491794: state=Running err=<nil>
	W0908 12:43:41.951387  583922 fix.go:138] unexpected machine state, will restart: <nil>
	I0908 12:43:41.954733  583922 out.go:252] * Updating the running docker "functional-491794" container ...
	I0908 12:43:41.954760  583922 machine.go:93] provisionDockerMachine start ...
	I0908 12:43:41.954841  583922 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-491794
	I0908 12:43:41.971928  583922 main.go:141] libmachine: Using SSH client type: native
	I0908 12:43:41.972318  583922 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 33514 <nil> <nil>}
	I0908 12:43:41.972325  583922 main.go:141] libmachine: About to run SSH command:
	hostname
	I0908 12:43:42.110897  583922 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-491794
	
	I0908 12:43:42.110915  583922 ubuntu.go:182] provisioning hostname "functional-491794"
	I0908 12:43:42.110993  583922 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-491794
	I0908 12:43:42.156133  583922 main.go:141] libmachine: Using SSH client type: native
	I0908 12:43:42.156479  583922 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 33514 <nil> <nil>}
	I0908 12:43:42.156508  583922 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-491794 && echo "functional-491794" | sudo tee /etc/hostname
	I0908 12:43:42.324437  583922 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-491794
	
	I0908 12:43:42.324506  583922 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-491794
	I0908 12:43:42.343629  583922 main.go:141] libmachine: Using SSH client type: native
	I0908 12:43:42.343966  583922 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 33514 <nil> <nil>}
	I0908 12:43:42.343984  583922 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-491794' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-491794/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-491794' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0908 12:43:42.470465  583922 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0908 12:43:42.470481  583922 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21508-558996/.minikube CaCertPath:/home/jenkins/minikube-integration/21508-558996/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21508-558996/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21508-558996/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21508-558996/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21508-558996/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21508-558996/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21508-558996/.minikube}
	I0908 12:43:42.470510  583922 ubuntu.go:190] setting up certificates
	I0908 12:43:42.470519  583922 provision.go:84] configureAuth start
	I0908 12:43:42.470578  583922 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-491794
	I0908 12:43:42.489442  583922 provision.go:143] copyHostCerts
	I0908 12:43:42.489505  583922 exec_runner.go:144] found /home/jenkins/minikube-integration/21508-558996/.minikube/cert.pem, removing ...
	I0908 12:43:42.489528  583922 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21508-558996/.minikube/cert.pem
	I0908 12:43:42.489605  583922 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21508-558996/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21508-558996/.minikube/cert.pem (1123 bytes)
	I0908 12:43:42.489707  583922 exec_runner.go:144] found /home/jenkins/minikube-integration/21508-558996/.minikube/key.pem, removing ...
	I0908 12:43:42.489712  583922 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21508-558996/.minikube/key.pem
	I0908 12:43:42.489737  583922 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21508-558996/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21508-558996/.minikube/key.pem (1675 bytes)
	I0908 12:43:42.489913  583922 exec_runner.go:144] found /home/jenkins/minikube-integration/21508-558996/.minikube/ca.pem, removing ...
	I0908 12:43:42.489919  583922 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21508-558996/.minikube/ca.pem
	I0908 12:43:42.489956  583922 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21508-558996/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21508-558996/.minikube/ca.pem (1082 bytes)
	I0908 12:43:42.490074  583922 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21508-558996/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21508-558996/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21508-558996/.minikube/certs/ca-key.pem org=jenkins.functional-491794 san=[127.0.0.1 192.168.49.2 functional-491794 localhost minikube]
	I0908 12:43:43.453589  583922 provision.go:177] copyRemoteCerts
	I0908 12:43:43.453646  583922 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0908 12:43:43.453692  583922 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-491794
	I0908 12:43:43.474468  583922 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33514 SSHKeyPath:/home/jenkins/minikube-integration/21508-558996/.minikube/machines/functional-491794/id_rsa Username:docker}
	I0908 12:43:43.566752  583922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-558996/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0908 12:43:43.592927  583922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-558996/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0908 12:43:43.618977  583922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-558996/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0908 12:43:43.643987  583922 provision.go:87] duration metric: took 1.173444386s to configureAuth
	I0908 12:43:43.644005  583922 ubuntu.go:206] setting minikube options for container-runtime
	I0908 12:43:43.644205  583922 config.go:182] Loaded profile config "functional-491794": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 12:43:43.644316  583922 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-491794
	I0908 12:43:43.662043  583922 main.go:141] libmachine: Using SSH client type: native
	I0908 12:43:43.662352  583922 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef840] 0x3f2000 <nil>  [] 0s} 127.0.0.1 33514 <nil> <nil>}
	I0908 12:43:43.662364  583922 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0908 12:43:49.076454  583922 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0908 12:43:49.076465  583922 machine.go:96] duration metric: took 7.121699451s to provisionDockerMachine
	I0908 12:43:49.076475  583922 start.go:293] postStartSetup for "functional-491794" (driver="docker")
	I0908 12:43:49.076485  583922 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0908 12:43:49.076545  583922 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0908 12:43:49.076588  583922 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-491794
	I0908 12:43:49.093538  583922 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33514 SSHKeyPath:/home/jenkins/minikube-integration/21508-558996/.minikube/machines/functional-491794/id_rsa Username:docker}
	I0908 12:43:49.182520  583922 ssh_runner.go:195] Run: cat /etc/os-release
	I0908 12:43:49.185483  583922 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0908 12:43:49.185506  583922 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0908 12:43:49.185514  583922 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0908 12:43:49.185520  583922 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0908 12:43:49.185529  583922 filesync.go:126] Scanning /home/jenkins/minikube-integration/21508-558996/.minikube/addons for local assets ...
	I0908 12:43:49.185581  583922 filesync.go:126] Scanning /home/jenkins/minikube-integration/21508-558996/.minikube/files for local assets ...
	I0908 12:43:49.185671  583922 filesync.go:149] local asset: /home/jenkins/minikube-integration/21508-558996/.minikube/files/etc/ssl/certs/5608492.pem -> 5608492.pem in /etc/ssl/certs
	I0908 12:43:49.185758  583922 filesync.go:149] local asset: /home/jenkins/minikube-integration/21508-558996/.minikube/files/etc/test/nested/copy/560849/hosts -> hosts in /etc/test/nested/copy/560849
	I0908 12:43:49.185826  583922 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/560849
	I0908 12:43:49.194412  583922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-558996/.minikube/files/etc/ssl/certs/5608492.pem --> /etc/ssl/certs/5608492.pem (1708 bytes)
	I0908 12:43:49.219489  583922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-558996/.minikube/files/etc/test/nested/copy/560849/hosts --> /etc/test/nested/copy/560849/hosts (40 bytes)
	I0908 12:43:49.244276  583922 start.go:296] duration metric: took 167.785261ms for postStartSetup
	I0908 12:43:49.244356  583922 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 12:43:49.244397  583922 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-491794
	I0908 12:43:49.262161  583922 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33514 SSHKeyPath:/home/jenkins/minikube-integration/21508-558996/.minikube/machines/functional-491794/id_rsa Username:docker}
	I0908 12:43:49.351067  583922 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0908 12:43:49.355940  583922 fix.go:56] duration metric: took 7.421569312s for fixHost
	I0908 12:43:49.355955  583922 start.go:83] releasing machines lock for "functional-491794", held for 7.421609673s
	I0908 12:43:49.356025  583922 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-491794
	I0908 12:43:49.372419  583922 ssh_runner.go:195] Run: cat /version.json
	I0908 12:43:49.372474  583922 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-491794
	I0908 12:43:49.372715  583922 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0908 12:43:49.372768  583922 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-491794
	I0908 12:43:49.393686  583922 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33514 SSHKeyPath:/home/jenkins/minikube-integration/21508-558996/.minikube/machines/functional-491794/id_rsa Username:docker}
	I0908 12:43:49.407987  583922 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33514 SSHKeyPath:/home/jenkins/minikube-integration/21508-558996/.minikube/machines/functional-491794/id_rsa Username:docker}
	I0908 12:43:49.481525  583922 ssh_runner.go:195] Run: systemctl --version
	I0908 12:43:49.609598  583922 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0908 12:43:49.757023  583922 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0908 12:43:49.761573  583922 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 12:43:49.770947  583922 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0908 12:43:49.771018  583922 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0908 12:43:49.780677  583922 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0908 12:43:49.780692  583922 start.go:495] detecting cgroup driver to use...
	I0908 12:43:49.780734  583922 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0908 12:43:49.780787  583922 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0908 12:43:49.794436  583922 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0908 12:43:49.806734  583922 docker.go:218] disabling cri-docker service (if available) ...
	I0908 12:43:49.806790  583922 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0908 12:43:49.820550  583922 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0908 12:43:49.832796  583922 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0908 12:43:49.969066  583922 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0908 12:43:50.107861  583922 docker.go:234] disabling docker service ...
	I0908 12:43:50.107921  583922 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0908 12:43:50.121442  583922 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0908 12:43:50.134077  583922 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0908 12:43:50.256921  583922 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0908 12:43:50.381471  583922 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0908 12:43:50.393572  583922 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0908 12:43:50.411536  583922 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0908 12:43:50.411599  583922 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 12:43:50.422197  583922 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0908 12:43:50.422259  583922 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 12:43:50.432968  583922 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 12:43:50.443747  583922 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 12:43:50.454498  583922 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0908 12:43:50.464792  583922 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 12:43:50.475656  583922 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 12:43:50.485964  583922 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0908 12:43:50.496615  583922 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0908 12:43:50.505616  583922 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0908 12:43:50.514416  583922 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 12:43:50.632540  583922 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0908 12:43:54.902320  583922 ssh_runner.go:235] Completed: sudo systemctl restart crio: (4.269756752s)
	I0908 12:43:54.902338  583922 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0908 12:43:54.902391  583922 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0908 12:43:54.906060  583922 start.go:563] Will wait 60s for crictl version
	I0908 12:43:54.906116  583922 ssh_runner.go:195] Run: which crictl
	I0908 12:43:54.909562  583922 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0908 12:43:54.958378  583922 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0908 12:43:54.958456  583922 ssh_runner.go:195] Run: crio --version
	I0908 12:43:55.019368  583922 ssh_runner.go:195] Run: crio --version
	I0908 12:43:55.067232  583922 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0908 12:43:55.070345  583922 cli_runner.go:164] Run: docker network inspect functional-491794 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0908 12:43:55.088070  583922 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0908 12:43:55.095928  583922 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0908 12:43:55.098906  583922 kubeadm.go:875] updating cluster {Name:functional-491794 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-491794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0908 12:43:55.099052  583922 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 12:43:55.099134  583922 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 12:43:55.145456  583922 crio.go:514] all images are preloaded for cri-o runtime.
	I0908 12:43:55.145469  583922 crio.go:433] Images already preloaded, skipping extraction
	I0908 12:43:55.145524  583922 ssh_runner.go:195] Run: sudo crictl images --output json
	I0908 12:43:55.185140  583922 crio.go:514] all images are preloaded for cri-o runtime.
	I0908 12:43:55.185153  583922 cache_images.go:85] Images are preloaded, skipping loading
	I0908 12:43:55.185160  583922 kubeadm.go:926] updating node { 192.168.49.2 8441 v1.34.0 crio true true} ...
	I0908 12:43:55.185263  583922 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-491794 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:functional-491794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0908 12:43:55.185349  583922 ssh_runner.go:195] Run: crio config
	I0908 12:43:55.235302  583922 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0908 12:43:55.235321  583922 cni.go:84] Creating CNI manager for ""
	I0908 12:43:55.235332  583922 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0908 12:43:55.235339  583922 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0908 12:43:55.235361  583922 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-491794 NodeName:functional-491794 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0908 12:43:55.235480  583922 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-491794"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0908 12:43:55.235549  583922 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0908 12:43:55.245019  583922 binaries.go:44] Found k8s binaries, skipping transfer
	I0908 12:43:55.245083  583922 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0908 12:43:55.254371  583922 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0908 12:43:55.273717  583922 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0908 12:43:55.291915  583922 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2064 bytes)
	I0908 12:43:55.310574  583922 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0908 12:43:55.314223  583922 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0908 12:43:55.433137  583922 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0908 12:43:55.445246  583922 certs.go:68] Setting up /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/functional-491794 for IP: 192.168.49.2
	I0908 12:43:55.445258  583922 certs.go:194] generating shared ca certs ...
	I0908 12:43:55.445273  583922 certs.go:226] acquiring lock for ca certs: {Name:mk0ff9e19e9952011d1b6ccb4c93c3f59626ecb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:43:55.445420  583922 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21508-558996/.minikube/ca.key
	I0908 12:43:55.445460  583922 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21508-558996/.minikube/proxy-client-ca.key
	I0908 12:43:55.445466  583922 certs.go:256] generating profile certs ...
	I0908 12:43:55.445563  583922 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/functional-491794/client.key
	I0908 12:43:55.445611  583922 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/functional-491794/apiserver.key.5346c355
	I0908 12:43:55.445649  583922 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/functional-491794/proxy-client.key
	I0908 12:43:55.445762  583922 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-558996/.minikube/certs/560849.pem (1338 bytes)
	W0908 12:43:55.445851  583922 certs.go:480] ignoring /home/jenkins/minikube-integration/21508-558996/.minikube/certs/560849_empty.pem, impossibly tiny 0 bytes
	I0908 12:43:55.445860  583922 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-558996/.minikube/certs/ca-key.pem (1675 bytes)
	I0908 12:43:55.445887  583922 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-558996/.minikube/certs/ca.pem (1082 bytes)
	I0908 12:43:55.445911  583922 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-558996/.minikube/certs/cert.pem (1123 bytes)
	I0908 12:43:55.445932  583922 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-558996/.minikube/certs/key.pem (1675 bytes)
	I0908 12:43:55.445974  583922 certs.go:484] found cert: /home/jenkins/minikube-integration/21508-558996/.minikube/files/etc/ssl/certs/5608492.pem (1708 bytes)
	I0908 12:43:55.446593  583922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-558996/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0908 12:43:55.474680  583922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-558996/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0908 12:43:55.501519  583922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-558996/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0908 12:43:55.526957  583922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-558996/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0908 12:43:55.552097  583922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/functional-491794/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0908 12:43:55.577994  583922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/functional-491794/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0908 12:43:55.602400  583922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/functional-491794/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0908 12:43:55.627387  583922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/functional-491794/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0908 12:43:55.651954  583922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-558996/.minikube/certs/560849.pem --> /usr/share/ca-certificates/560849.pem (1338 bytes)
	I0908 12:43:55.676547  583922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-558996/.minikube/files/etc/ssl/certs/5608492.pem --> /usr/share/ca-certificates/5608492.pem (1708 bytes)
	I0908 12:43:55.701005  583922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21508-558996/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0908 12:43:55.725731  583922 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0908 12:43:55.744258  583922 ssh_runner.go:195] Run: openssl version
	I0908 12:43:55.749714  583922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5608492.pem && ln -fs /usr/share/ca-certificates/5608492.pem /etc/ssl/certs/5608492.pem"
	I0908 12:43:55.759080  583922 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5608492.pem
	I0908 12:43:55.762627  583922 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep  8 12:41 /usr/share/ca-certificates/5608492.pem
	I0908 12:43:55.762685  583922 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5608492.pem
	I0908 12:43:55.769478  583922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5608492.pem /etc/ssl/certs/3ec20f2e.0"
	I0908 12:43:55.778337  583922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0908 12:43:55.787843  583922 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0908 12:43:55.791650  583922 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  8 12:34 /usr/share/ca-certificates/minikubeCA.pem
	I0908 12:43:55.791704  583922 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0908 12:43:55.798834  583922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0908 12:43:55.807955  583922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/560849.pem && ln -fs /usr/share/ca-certificates/560849.pem /etc/ssl/certs/560849.pem"
	I0908 12:43:55.817213  583922 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/560849.pem
	I0908 12:43:55.820806  583922 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep  8 12:41 /usr/share/ca-certificates/560849.pem
	I0908 12:43:55.820861  583922 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/560849.pem
	I0908 12:43:55.827960  583922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/560849.pem /etc/ssl/certs/51391683.0"
	I0908 12:43:55.836895  583922 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0908 12:43:55.840355  583922 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0908 12:43:55.847049  583922 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0908 12:43:55.853656  583922 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0908 12:43:55.860310  583922 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0908 12:43:55.867165  583922 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0908 12:43:55.873910  583922 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0908 12:43:55.880657  583922 kubeadm.go:392] StartCluster: {Name:functional-491794 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-491794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 12:43:55.880739  583922 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0908 12:43:55.880809  583922 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0908 12:43:55.918607  583922 cri.go:89] found id: "d2aa29454b3b171c566ab2cf146a757f906a4231ca43ede3af9b0f0d36a136de"
	I0908 12:43:55.918618  583922 cri.go:89] found id: "8073f48b48d7667f1f1fae97dce6f3e6f9692e1d063547f51d380a30aebcc1ba"
	I0908 12:43:55.918621  583922 cri.go:89] found id: "dc5abe6733b679c79e753d1efc9505133d656401ab95b55d69d5c33758afb437"
	I0908 12:43:55.918624  583922 cri.go:89] found id: "ec7c965df42e621c6f998810e040b1da5c5ed14a3a2dd0511bf3e1d933c9b8d1"
	I0908 12:43:55.918626  583922 cri.go:89] found id: "ef55303250750af65f30cf759e0c757e0d4a3e838c810a090a31f63b0cbae39b"
	I0908 12:43:55.918629  583922 cri.go:89] found id: "bc2ed35735ab2aaa7cda17082590013120adc1233891ec057f4dcb1a21e0525a"
	I0908 12:43:55.918631  583922 cri.go:89] found id: "276628d34fe6783bb969eca95152bbb240880c9a4ae428ff7ac10a52c7e02f61"
	I0908 12:43:55.918633  583922 cri.go:89] found id: "76bc05fd5985b0712a6a76174067b246ee3f9678fba011dae14217627ed11f4b"
	I0908 12:43:55.918635  583922 cri.go:89] found id: "a1535c3832502a3703e7f227eddb0e8ad5833684b3aae3de69449840ea7897b5"
	I0908 12:43:55.918641  583922 cri.go:89] found id: "017bd0919aec3aa6118129b86e00d968675f69f5cf3df58fd18da4be84e89a4b"
	I0908 12:43:55.918643  583922 cri.go:89] found id: "6189987e3867b56f571d9d35e3f7dad4d2799be212b819f14785830db460d311"
	I0908 12:43:55.918654  583922 cri.go:89] found id: "788d824fad57e0ccb428652e61c49bc55c22eda4da318dafe8511f017ffa1bb3"
	I0908 12:43:55.918656  583922 cri.go:89] found id: "1656a10ccb470cbd1b94cd088dad45f3206d156c799a5704c2e2ca73850bd427"
	I0908 12:43:55.918658  583922 cri.go:89] found id: "c51108d5fbbe06e08b84c51dc18d9f9ddfaeec2898f22d9fb7bd11f8078a19dd"
	I0908 12:43:55.918661  583922 cri.go:89] found id: "dcade3b440403db2b8ddc6d8937aa009733a197ddfa1cd290b60ba1c72a3ad05"
	I0908 12:43:55.918666  583922 cri.go:89] found id: "c6ef3ad23491b18c8d81b7505d53984d6817e965c607d0137348abfb0cc2115d"
	I0908 12:43:55.918668  583922 cri.go:89] found id: ""
	I0908 12:43:55.918718  583922 ssh_runner.go:195] Run: sudo runc list -f json

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-491794 -n functional-491794
helpers_test.go:269: (dbg) Run:  kubectl --context functional-491794 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-node-75c85bcc94-bmfbl hello-node-connect-7d85dfc575-595vq
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-491794 describe pod hello-node-75c85bcc94-bmfbl hello-node-connect-7d85dfc575-595vq
helpers_test.go:290: (dbg) kubectl --context functional-491794 describe pod hello-node-75c85bcc94-bmfbl hello-node-connect-7d85dfc575-595vq:

-- stdout --
	Name:             hello-node-75c85bcc94-bmfbl
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-491794/192.168.49.2
	Start Time:       Mon, 08 Sep 2025 12:45:03 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-j5cqr (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-j5cqr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m43s                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-bmfbl to functional-491794
	  Normal   Pulling    6m55s (x5 over 9m43s)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m55s (x5 over 9m43s)   kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     6m55s (x5 over 9m43s)   kubelet            Error: ErrImagePull
	  Warning  Failed     4m41s (x20 over 9m43s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m26s (x21 over 9m43s)  kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-595vq
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-491794/192.168.49.2
	Start Time:       Mon, 08 Sep 2025 12:44:44 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-h4wq6 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-h4wq6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-595vq to functional-491794
	  Normal   Pulling    7m1s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m1s (x5 over 10m)      kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     7m1s (x5 over 10m)      kubelet            Error: ErrImagePull
	  Normal   BackOff    4m50s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m50s (x21 over 9m59s)  kubelet            Error: ImagePullBackOff

-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (604.03s)
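Root cause: every pull event above fails with short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf". CRI-O (via containers/image) will not expand a short image name unless an unqualified-search registry or a short-name alias is configured. A minimal workaround sketch, run inside the node via minikube -p functional-491794 ssh (the profile name and paths come from the logs above; the drop-in file name is hypothetical):

	# Let CRI-O resolve short names such as kicbase/echo-server against docker.io
	printf 'unqualified-search-registries = ["docker.io"]\n' \
	  | sudo tee /etc/containers/registries.conf.d/99-unqualified-search.conf
	# Restart CRI-O so the new registries configuration is picked up
	sudo systemctl restart crio

Once the node can expand short names, the kubelet's next back-off retry should pull the image and let hello-node-connect start.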

TestFunctional/parallel/ServiceCmd/DeployApp (600.88s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-491794 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-491794 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-bmfbl" [44df4bf4-04a1-415d-888e-561bb76da133] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E0908 12:46:29.299067  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/addons-090979/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:46:57.013557  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/addons-090979/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:51:29.298665  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/addons-090979/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-491794 -n functional-491794
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-09-08 12:55:04.483997899 +0000 UTC m=+1304.956414082
functional_test.go:1460: (dbg) Run:  kubectl --context functional-491794 describe po hello-node-75c85bcc94-bmfbl -n default
functional_test.go:1460: (dbg) kubectl --context functional-491794 describe po hello-node-75c85bcc94-bmfbl -n default:
Name:             hello-node-75c85bcc94-bmfbl
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-491794/192.168.49.2
Start Time:       Mon, 08 Sep 2025 12:45:03 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-j5cqr (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-j5cqr:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-bmfbl to functional-491794
Normal   Pulling    7m12s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m12s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
Warning  Failed     7m12s (x5 over 10m)   kubelet            Error: ErrImagePull
Warning  Failed     4m58s (x20 over 10m)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m43s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-491794 logs hello-node-75c85bcc94-bmfbl -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-491794 logs hello-node-75c85bcc94-bmfbl -n default: exit status 1 (90.192015ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-bmfbl" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1460: kubectl --context functional-491794 logs hello-node-75c85bcc94-bmfbl -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.88s)
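Alternatively, the failure can be sidestepped at deployment time with a fully qualified image reference, since CRI-O never consults unqualified-search-registries when the registry is explicit. A sketch of the same create/expose pair with the registry spelled out (this assumes the image is published on Docker Hub under that name):

	kubectl --context functional-491794 create deployment hello-node \
	  --image=docker.io/kicbase/echo-server:latest
	kubectl --context functional-491794 expose deployment hello-node \
	  --type=NodePort --port=8080

With an explicit registry the pull either succeeds or fails with a concrete registry error, rather than being rejected before any network access is attempted.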

TestFunctional/parallel/ServiceCmd/HTTPS (0.57s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-491794 service --namespace=default --https --url hello-node: exit status 115 (568.556978ms)

-- stdout --
	https://192.168.49.2:30628

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-491794 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.57s)
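SVC_UNREACHABLE here is downstream of the DeployApp failure: minikube service exits with this error when it finds no running pod behind the service, even though the NodePort URL itself is printed. A quick sketch for confirming that the service has no ready backends, using only standard kubectl:

	kubectl --context functional-491794 get endpoints hello-node
	kubectl --context functional-491794 get pods -l app=hello-node

An empty ENDPOINTS column together with pods stuck in ImagePullBackOff matches the exit status 115 seen in this subtest and in the Format and URL subtests below.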

TestFunctional/parallel/ServiceCmd/Format (0.55s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-491794 service hello-node --url --format={{.IP}}: exit status 115 (554.322656ms)

-- stdout --
	192.168.49.2

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-491794 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.55s)

TestFunctional/parallel/ServiceCmd/URL (0.54s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-491794 service hello-node --url: exit status 115 (537.993945ms)

-- stdout --
	http://192.168.49.2:30628

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-491794 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30628
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.54s)

Test pass (294/332)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 6.04
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.21
9 TestDownloadOnly/v1.28.0/DeleteAll 0.41
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.2
12 TestDownloadOnly/v1.34.0/json-events 5.65
13 TestDownloadOnly/v1.34.0/preload-exists 0
17 TestDownloadOnly/v1.34.0/LogsDuration 0.07
18 TestDownloadOnly/v1.34.0/DeleteAll 0.22
19 TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds 0.15
21 TestBinaryMirror 0.57
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 174.45
31 TestAddons/serial/GCPAuth/Namespaces 0.19
32 TestAddons/serial/GCPAuth/FakeCredentials 11.92
35 TestAddons/parallel/Registry 17.5
36 TestAddons/parallel/RegistryCreds 0.74
38 TestAddons/parallel/InspektorGadget 6.28
39 TestAddons/parallel/MetricsServer 6.88
41 TestAddons/parallel/CSI 40.17
42 TestAddons/parallel/Headlamp 17.92
43 TestAddons/parallel/CloudSpanner 6.62
44 TestAddons/parallel/LocalPath 52.1
45 TestAddons/parallel/NvidiaDevicePlugin 6.65
46 TestAddons/parallel/Yakd 11.94
48 TestAddons/StoppedEnableDisable 12.2
49 TestCertOptions 44.26
50 TestCertExpiration 259.72
52 TestForceSystemdFlag 39.23
53 TestForceSystemdEnv 38.71
59 TestErrorSpam/setup 33.96
60 TestErrorSpam/start 0.83
61 TestErrorSpam/status 1.14
62 TestErrorSpam/pause 1.89
63 TestErrorSpam/unpause 1.94
64 TestErrorSpam/stop 1.55
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 80.46
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 28.21
71 TestFunctional/serial/KubeContext 0.06
72 TestFunctional/serial/KubectlGetPods 0.09
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.82
76 TestFunctional/serial/CacheCmd/cache/add_local 1.43
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
78 TestFunctional/serial/CacheCmd/cache/list 0.06
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.32
80 TestFunctional/serial/CacheCmd/cache/cache_reload 2.16
81 TestFunctional/serial/CacheCmd/cache/delete 0.12
82 TestFunctional/serial/MinikubeKubectlCmd 0.14
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
84 TestFunctional/serial/ExtraConfig 37.74
85 TestFunctional/serial/ComponentHealth 0.1
86 TestFunctional/serial/LogsCmd 1.75
87 TestFunctional/serial/LogsFileCmd 1.77
88 TestFunctional/serial/InvalidService 4.29
90 TestFunctional/parallel/ConfigCmd 0.53
91 TestFunctional/parallel/DashboardCmd 9.57
92 TestFunctional/parallel/DryRun 0.47
93 TestFunctional/parallel/InternationalLanguage 0.21
94 TestFunctional/parallel/StatusCmd 1.06
99 TestFunctional/parallel/AddonsCmd 0.19
100 TestFunctional/parallel/PersistentVolumeClaim 27.02
102 TestFunctional/parallel/SSHCmd 0.69
103 TestFunctional/parallel/CpCmd 2.06
105 TestFunctional/parallel/FileSync 0.34
106 TestFunctional/parallel/CertSync 2.17
110 TestFunctional/parallel/NodeLabels 0.1
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.71
114 TestFunctional/parallel/License 0.36
115 TestFunctional/parallel/Version/short 0.09
116 TestFunctional/parallel/Version/components 1.39
117 TestFunctional/parallel/ImageCommands/ImageListShort 0.28
118 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
119 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
120 TestFunctional/parallel/ImageCommands/ImageListYaml 0.26
121 TestFunctional/parallel/ImageCommands/ImageBuild 4.05
122 TestFunctional/parallel/ImageCommands/Setup 0.7
123 TestFunctional/parallel/UpdateContextCmd/no_changes 0.19
124 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.18
125 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.17
126 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.62
127 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.06
128 TestFunctional/parallel/ProfileCmd/profile_not_create 0.52
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.38
130 TestFunctional/parallel/ProfileCmd/profile_list 0.56
131 TestFunctional/parallel/ProfileCmd/profile_json_output 0.52
132 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.7
134 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.67
135 TestFunctional/parallel/ImageCommands/ImageRemove 0.81
136 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
138 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.49
139 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.95
140 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.63
141 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.13
142 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
146 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
148 TestFunctional/parallel/MountCmd/any-port 8.83
149 TestFunctional/parallel/MountCmd/specific-port 1.73
150 TestFunctional/parallel/MountCmd/VerifyCleanup 2.48
151 TestFunctional/parallel/ServiceCmd/List 1.34
152 TestFunctional/parallel/ServiceCmd/JSONOutput 1.4
156 TestFunctional/delete_echo-server_images 0.05
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
163 TestMultiControlPlane/serial/StartCluster 196.31
164 TestMultiControlPlane/serial/DeployApp 9.42
165 TestMultiControlPlane/serial/PingHostFromPods 1.74
166 TestMultiControlPlane/serial/AddWorkerNode 32.84
167 TestMultiControlPlane/serial/NodeLabels 0.11
168 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.01
169 TestMultiControlPlane/serial/CopyFile 19.44
170 TestMultiControlPlane/serial/StopSecondaryNode 12.72
171 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.81
172 TestMultiControlPlane/serial/RestartSecondaryNode 34.18
173 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.3
174 TestMultiControlPlane/serial/RestartClusterKeepsNodes 116.73
175 TestMultiControlPlane/serial/DeleteSecondaryNode 13.02
176 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.76
177 TestMultiControlPlane/serial/StopCluster 35.93
178 TestMultiControlPlane/serial/RestartCluster 87.57
179 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.99
180 TestMultiControlPlane/serial/AddSecondaryNode 63.77
181 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.03
185 TestJSONOutput/start/Command 79.01
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Command 0.79
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Command 0.68
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.84
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.28
210 TestKicCustomNetwork/create_custom_network 42.49
211 TestKicCustomNetwork/use_default_bridge_network 35.85
212 TestKicExistingNetwork 31.01
213 TestKicCustomSubnet 34.8
214 TestKicStaticIP 34.23
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 74.83
219 TestMountStart/serial/StartWithMountFirst 6.79
220 TestMountStart/serial/VerifyMountFirst 0.26
221 TestMountStart/serial/StartWithMountSecond 6.05
222 TestMountStart/serial/VerifyMountSecond 0.27
223 TestMountStart/serial/DeleteFirst 1.62
224 TestMountStart/serial/VerifyMountPostDelete 0.26
225 TestMountStart/serial/Stop 1.28
226 TestMountStart/serial/RestartStopped 8.55
227 TestMountStart/serial/VerifyMountPostStop 0.26
230 TestMultiNode/serial/FreshStart2Nodes 138.09
231 TestMultiNode/serial/DeployApp2Nodes 6.68
232 TestMultiNode/serial/PingHostFrom2Pods 1
233 TestMultiNode/serial/AddNode 57.3
234 TestMultiNode/serial/MultiNodeLabels 0.09
235 TestMultiNode/serial/ProfileList 0.68
236 TestMultiNode/serial/CopyFile 10.26
237 TestMultiNode/serial/StopNode 2.27
238 TestMultiNode/serial/StartAfterStop 7.82
239 TestMultiNode/serial/RestartKeepsNodes 78.75
240 TestMultiNode/serial/DeleteNode 5.6
241 TestMultiNode/serial/StopMultiNode 23.86
242 TestMultiNode/serial/RestartMultiNode 54.38
243 TestMultiNode/serial/ValidateNameConflict 35.05
248 TestPreload 134.45
250 TestScheduledStopUnix 109.83
253 TestInsufficientStorage 11.91
254 TestRunningBinaryUpgrade 56.31
256 TestKubernetesUpgrade 364.55
257 TestMissingContainerUpgrade 123.61
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
260 TestNoKubernetes/serial/StartWithK8s 49.35
261 TestNoKubernetes/serial/StartWithStopK8s 14.57
262 TestNoKubernetes/serial/Start 9.45
263 TestNoKubernetes/serial/VerifyK8sNotRunning 0.26
264 TestNoKubernetes/serial/ProfileList 0.7
265 TestNoKubernetes/serial/Stop 1.2
266 TestNoKubernetes/serial/StartNoArgs 6.76
267 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.27
268 TestStoppedBinaryUpgrade/Setup 1.11
269 TestStoppedBinaryUpgrade/Upgrade 60.43
270 TestStoppedBinaryUpgrade/MinikubeLogs 1.2
279 TestPause/serial/Start 80.28
280 TestPause/serial/SecondStartNoReconfiguration 17.53
281 TestPause/serial/Pause 0.79
282 TestPause/serial/VerifyStatus 0.31
283 TestPause/serial/Unpause 0.69
284 TestPause/serial/PauseAgain 0.87
285 TestPause/serial/DeletePaused 2.69
286 TestPause/serial/VerifyDeletedResources 13.41
294 TestNetworkPlugins/group/false 5.22
299 TestStartStop/group/old-k8s-version/serial/FirstStart 62.47
300 TestStartStop/group/old-k8s-version/serial/DeployApp 10.41
301 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.17
302 TestStartStop/group/old-k8s-version/serial/Stop 11.95
303 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
304 TestStartStop/group/old-k8s-version/serial/SecondStart 52.37
305 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
306 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.11
307 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.26
308 TestStartStop/group/old-k8s-version/serial/Pause 3.27
310 TestStartStop/group/no-preload/serial/FirstStart 72.36
312 TestStartStop/group/embed-certs/serial/FirstStart 79.89
313 TestStartStop/group/no-preload/serial/DeployApp 10.42
314 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.25
315 TestStartStop/group/no-preload/serial/Stop 12.06
316 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
317 TestStartStop/group/no-preload/serial/SecondStart 55.4
318 TestStartStop/group/embed-certs/serial/DeployApp 11.35
319 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.31
320 TestStartStop/group/embed-certs/serial/Stop 11.99
321 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
322 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
323 TestStartStop/group/embed-certs/serial/SecondStart 53.39
324 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.14
325 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.32
326 TestStartStop/group/no-preload/serial/Pause 4.72
328 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 88.5
329 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
330 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
331 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
332 TestStartStop/group/embed-certs/serial/Pause 3.12
334 TestStartStop/group/newest-cni/serial/FirstStart 36.07
335 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.44
336 TestStartStop/group/newest-cni/serial/DeployApp 0
337 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.17
338 TestStartStop/group/newest-cni/serial/Stop 1.23
339 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
340 TestStartStop/group/newest-cni/serial/SecondStart 17.4
341 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.53
342 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.34
343 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
344 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
345 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
346 TestStartStop/group/newest-cni/serial/Pause 3.12
347 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.32
348 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 64.02
349 TestNetworkPlugins/group/auto/Start 87.4
350 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
351 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.11
352 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
353 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.14
354 TestNetworkPlugins/group/kindnet/Start 87.95
355 TestNetworkPlugins/group/auto/KubeletFlags 0.29
356 TestNetworkPlugins/group/auto/NetCatPod 11.27
357 TestNetworkPlugins/group/auto/DNS 0.27
358 TestNetworkPlugins/group/auto/Localhost 0.23
359 TestNetworkPlugins/group/auto/HairPin 0.2
360 TestNetworkPlugins/group/calico/Start 60.97
361 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
362 TestNetworkPlugins/group/kindnet/KubeletFlags 0.41
363 TestNetworkPlugins/group/kindnet/NetCatPod 11.32
364 TestNetworkPlugins/group/kindnet/DNS 0.28
365 TestNetworkPlugins/group/kindnet/Localhost 0.23
366 TestNetworkPlugins/group/kindnet/HairPin 0.26
367 TestNetworkPlugins/group/calico/ControllerPod 6.01
368 TestNetworkPlugins/group/calico/KubeletFlags 0.4
369 TestNetworkPlugins/group/calico/NetCatPod 13.45
370 TestNetworkPlugins/group/calico/DNS 0.23
371 TestNetworkPlugins/group/calico/Localhost 0.42
372 TestNetworkPlugins/group/calico/HairPin 0.31
373 TestNetworkPlugins/group/custom-flannel/Start 61.28
374 TestNetworkPlugins/group/enable-default-cni/Start 79.11
375 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.32
376 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.31
377 TestNetworkPlugins/group/custom-flannel/DNS 0.19
378 TestNetworkPlugins/group/custom-flannel/Localhost 0.17
379 TestNetworkPlugins/group/custom-flannel/HairPin 0.2
380 TestNetworkPlugins/group/flannel/Start 92.55
381 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.4
382 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.32
383 TestNetworkPlugins/group/enable-default-cni/DNS 0.26
384 TestNetworkPlugins/group/enable-default-cni/Localhost 0.18
385 TestNetworkPlugins/group/enable-default-cni/HairPin 0.19
386 TestNetworkPlugins/group/bridge/Start 72.8
387 TestNetworkPlugins/group/flannel/ControllerPod 6.01
388 TestNetworkPlugins/group/flannel/KubeletFlags 0.33
389 TestNetworkPlugins/group/flannel/NetCatPod 11.29
390 TestNetworkPlugins/group/flannel/DNS 0.18
391 TestNetworkPlugins/group/flannel/Localhost 0.15
392 TestNetworkPlugins/group/flannel/HairPin 0.17
393 TestNetworkPlugins/group/bridge/KubeletFlags 0.44
394 TestNetworkPlugins/group/bridge/NetCatPod 11.42
395 TestNetworkPlugins/group/bridge/DNS 0.27
396 TestNetworkPlugins/group/bridge/Localhost 0.2
397 TestNetworkPlugins/group/bridge/HairPin 0.2
TestDownloadOnly/v1.28.0/json-events (6.04s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-451816 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-451816 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (6.040679383s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (6.04s)

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I0908 12:33:25.613727  560849 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
I0908 12:33:25.613839  560849 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21508-558996/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
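
Note: the preload-exists assertion above only checks that the cached tarball is already on disk. A minimal sketch of that lookup in Go, assuming the cache layout shown in the log lines (the path construction and the preloadExists helper here are illustrative, not minikube's actual API):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	// preloadExists reports whether a preloaded-images tarball for the given
	// Kubernetes version and runtime is already cached under minikubeHome.
	func preloadExists(minikubeHome, k8sVersion, runtime string) (string, bool) {
		name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay-arm64.tar.lz4", k8sVersion, runtime)
		path := filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
		_, err := os.Stat(path)
		return path, err == nil
	}

	func main() {
		path, ok := preloadExists(os.Getenv("MINIKUBE_HOME"), "v1.28.0", "cri-o")
		fmt.Printf("preload %s exists: %v\n", path, ok)
	}
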
TestDownloadOnly/v1.28.0/LogsDuration (0.21s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-451816
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-451816: exit status 85 (205.711698ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-451816 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-451816 │ jenkins │ v1.36.0 │ 08 Sep 25 12:33 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 12:33:19
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 12:33:19.618535  560854 out.go:360] Setting OutFile to fd 1 ...
	I0908 12:33:19.618665  560854 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:33:19.618678  560854 out.go:374] Setting ErrFile to fd 2...
	I0908 12:33:19.618683  560854 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:33:19.618952  560854 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-558996/.minikube/bin
	W0908 12:33:19.619110  560854 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21508-558996/.minikube/config/config.json: open /home/jenkins/minikube-integration/21508-558996/.minikube/config/config.json: no such file or directory
	I0908 12:33:19.619521  560854 out.go:368] Setting JSON to true
	I0908 12:33:19.620393  560854 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":8152,"bootTime":1757326648,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I0908 12:33:19.620460  560854 start.go:140] virtualization:  
	I0908 12:33:19.622472  560854 out.go:99] [download-only-451816] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	W0908 12:33:19.622660  560854 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/21508-558996/.minikube/cache/preloaded-tarball: no such file or directory
	I0908 12:33:19.622764  560854 notify.go:220] Checking for updates...
	I0908 12:33:19.624814  560854 out.go:171] MINIKUBE_LOCATION=21508
	I0908 12:33:19.627269  560854 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 12:33:19.628533  560854 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21508-558996/kubeconfig
	I0908 12:33:19.629698  560854 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-558996/.minikube
	I0908 12:33:19.630918  560854 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W0908 12:33:19.633402  560854 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0908 12:33:19.633753  560854 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 12:33:19.663389  560854 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 12:33:19.663498  560854 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 12:33:19.728918  560854 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-09-08 12:33:19.719446342 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 12:33:19.729027  560854 docker.go:318] overlay module found
	I0908 12:33:19.730444  560854 out.go:99] Using the docker driver based on user configuration
	I0908 12:33:19.730481  560854 start.go:304] selected driver: docker
	I0908 12:33:19.730496  560854 start.go:918] validating driver "docker" against <nil>
	I0908 12:33:19.730594  560854 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 12:33:19.785654  560854 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-09-08 12:33:19.776064424 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 12:33:19.785871  560854 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0908 12:33:19.786157  560854 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I0908 12:33:19.786353  560854 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0908 12:33:19.788014  560854 out.go:171] Using Docker driver with root privileges
	I0908 12:33:19.789338  560854 cni.go:84] Creating CNI manager for ""
	I0908 12:33:19.789405  560854 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0908 12:33:19.789416  560854 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0908 12:33:19.789489  560854 start.go:348] cluster config:
	{Name:download-only-451816 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-451816 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 12:33:19.790917  560854 out.go:99] Starting "download-only-451816" primary control-plane node in "download-only-451816" cluster
	I0908 12:33:19.790941  560854 cache.go:123] Beginning downloading kic base image for docker with crio
	I0908 12:33:19.792174  560854 out.go:99] Pulling base image v0.0.47-1756980985-21488 ...
	I0908 12:33:19.792203  560854 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0908 12:33:19.792363  560854 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon
	I0908 12:33:19.808726  560854 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 to local cache
	I0908 12:33:19.808945  560854 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local cache directory
	I0908 12:33:19.809044  560854 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 to local cache
	I0908 12:33:19.853360  560854 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I0908 12:33:19.853384  560854 cache.go:58] Caching tarball of preloaded images
	I0908 12:33:19.853559  560854 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0908 12:33:19.855050  560854 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I0908 12:33:19.855071  560854 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 ...
	I0908 12:33:19.944898  560854 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/21508-558996/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I0908 12:33:23.863975  560854 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 ...
	I0908 12:33:23.864072  560854 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/21508-558996/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-451816 host does not exist
	  To start a cluster, run: "minikube start -p download-only-451816"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.21s)
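
Note: exit status 85 is the expected outcome here: a download-only profile never creates a host, so `minikube logs` has nothing to collect from, and the test passes because it asserts the non-zero exit. A sketch of reproducing that assertion from Go, with the binary path and profile name taken from the log (the harness's own runner helpers differ):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// Expected to fail: the download-only profile has no running host,
		// so "logs" exits non-zero (status 85 in the run above).
		cmd := exec.Command("out/minikube-linux-arm64", "logs", "-p", "download-only-451816")
		out, err := cmd.CombinedOutput()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			fmt.Printf("exit status %d\n", exitErr.ExitCode())
		}
		fmt.Print(string(out))
	}
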
TestDownloadOnly/v1.28.0/DeleteAll (0.41s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.41s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.2s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-451816
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.20s)

TestDownloadOnly/v1.34.0/json-events (5.65s)

=== RUN   TestDownloadOnly/v1.34.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-660430 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-660430 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.653682768s)
--- PASS: TestDownloadOnly/v1.34.0/json-events (5.65s)

TestDownloadOnly/v1.34.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.0/preload-exists
I0908 12:33:32.084664  560849 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
I0908 12:33:32.084733  560849 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21508-558996/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.0/preload-exists (0.00s)

TestDownloadOnly/v1.34.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.34.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-660430
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-660430: exit status 85 (68.688825ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-451816 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-451816 │ jenkins │ v1.36.0 │ 08 Sep 25 12:33 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.36.0 │ 08 Sep 25 12:33 UTC │ 08 Sep 25 12:33 UTC │
	│ delete  │ -p download-only-451816                                                                                                                                                   │ download-only-451816 │ jenkins │ v1.36.0 │ 08 Sep 25 12:33 UTC │ 08 Sep 25 12:33 UTC │
	│ start   │ -o=json --download-only -p download-only-660430 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-660430 │ jenkins │ v1.36.0 │ 08 Sep 25 12:33 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/08 12:33:26
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0908 12:33:26.481267  561050 out.go:360] Setting OutFile to fd 1 ...
	I0908 12:33:26.481424  561050 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:33:26.481467  561050 out.go:374] Setting ErrFile to fd 2...
	I0908 12:33:26.481478  561050 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:33:26.481803  561050 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-558996/.minikube/bin
	I0908 12:33:26.482299  561050 out.go:368] Setting JSON to true
	I0908 12:33:26.483222  561050 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":8159,"bootTime":1757326648,"procs":150,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I0908 12:33:26.483293  561050 start.go:140] virtualization:  
	I0908 12:33:26.489225  561050 out.go:99] [download-only-660430] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	I0908 12:33:26.489564  561050 notify.go:220] Checking for updates...
	I0908 12:33:26.494929  561050 out.go:171] MINIKUBE_LOCATION=21508
	I0908 12:33:26.500530  561050 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 12:33:26.506188  561050 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21508-558996/kubeconfig
	I0908 12:33:26.507327  561050 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-558996/.minikube
	I0908 12:33:26.508585  561050 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W0908 12:33:26.510767  561050 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0908 12:33:26.511049  561050 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 12:33:26.542328  561050 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 12:33:26.542447  561050 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 12:33:26.599839  561050 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-09-08 12:33:26.590525383 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 12:33:26.599948  561050 docker.go:318] overlay module found
	I0908 12:33:26.601440  561050 out.go:99] Using the docker driver based on user configuration
	I0908 12:33:26.601483  561050 start.go:304] selected driver: docker
	I0908 12:33:26.601505  561050 start.go:918] validating driver "docker" against <nil>
	I0908 12:33:26.601624  561050 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 12:33:26.671675  561050 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:48 SystemTime:2025-09-08 12:33:26.662188708 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 12:33:26.671839  561050 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0908 12:33:26.672153  561050 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I0908 12:33:26.672320  561050 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0908 12:33:26.673983  561050 out.go:171] Using Docker driver with root privileges
	I0908 12:33:26.675191  561050 cni.go:84] Creating CNI manager for ""
	I0908 12:33:26.675267  561050 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0908 12:33:26.675280  561050 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0908 12:33:26.675369  561050 start.go:348] cluster config:
	{Name:download-only-660430 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:download-only-660430 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 12:33:26.676624  561050 out.go:99] Starting "download-only-660430" primary control-plane node in "download-only-660430" cluster
	I0908 12:33:26.676655  561050 cache.go:123] Beginning downloading kic base image for docker with crio
	I0908 12:33:26.677875  561050 out.go:99] Pulling base image v0.0.47-1756980985-21488 ...
	I0908 12:33:26.677909  561050 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 12:33:26.678099  561050 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local docker daemon
	I0908 12:33:26.694388  561050 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 to local cache
	I0908 12:33:26.694534  561050 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local cache directory
	I0908 12:33:26.694560  561050 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 in local cache directory, skipping pull
	I0908 12:33:26.694570  561050 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 exists in cache, skipping pull
	I0908 12:33:26.694579  561050 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 as a tarball
	I0908 12:33:26.735437  561050 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.0/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4
	I0908 12:33:26.735467  561050 cache.go:58] Caching tarball of preloaded images
	I0908 12:33:26.735640  561050 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 12:33:26.736955  561050 out.go:99] Downloading Kubernetes v1.34.0 preload ...
	I0908 12:33:26.736997  561050 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4 ...
	I0908 12:33:26.827761  561050 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.0/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:36555bb244eebf6e383c5e8810b48b3a -> /home/jenkins/minikube-integration/21508-558996/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4
	I0908 12:33:30.477462  561050 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4 ...
	I0908 12:33:30.477568  561050 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/21508-558996/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-arm64.tar.lz4 ...
	I0908 12:33:31.420879  561050 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0908 12:33:31.421242  561050 profile.go:143] Saving config to /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/download-only-660430/config.json ...
	I0908 12:33:31.421278  561050 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/download-only-660430/config.json: {Name:mk772be298908d2ca5f7262b78a02c4929cac89c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0908 12:33:31.421468  561050 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0908 12:33:31.421627  561050 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/21508-558996/.minikube/cache/linux/arm64/v1.34.0/kubectl
	
	
	* The control-plane node download-only-660430 host does not exist
	  To start a cluster, run: "minikube start -p download-only-660430"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.0/LogsDuration (0.07s)

TestDownloadOnly/v1.34.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.34.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.0/DeleteAll (0.22s)

TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-660430
--- PASS: TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.15s)

TestBinaryMirror (0.57s)

=== RUN   TestBinaryMirror
I0908 12:33:33.365676  560849 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-724699 --alsologtostderr --binary-mirror http://127.0.0.1:41151 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-724699" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-724699
--- PASS: TestBinaryMirror (0.57s)
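
Note: this test starts minikube with --binary-mirror http://127.0.0.1:41151 so that kubectl and related binaries are fetched from a local HTTP server instead of dl.k8s.io. A sketch of such a mirror in Go, assuming release binaries staged under ./mirror with the same path layout as the upstream URL in the log (the directory name and exact layout are assumptions; the harness runs its own mirror):

	package main

	import (
		"log"
		"net/http"
	)

	func main() {
		// Serve ./mirror so that request paths such as
		// /release/v1.34.0/bin/linux/arm64/kubectl resolve locally.
		log.Fatal(http.ListenAndServe("127.0.0.1:41151", http.FileServer(http.Dir("./mirror"))))
	}
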
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-090979
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-090979: exit status 85 (70.676914ms)

-- stdout --
	* Profile "addons-090979" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-090979"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-090979
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-090979: exit status 85 (82.298292ms)

-- stdout --
	* Profile "addons-090979" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-090979"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

TestAddons/Setup (174.45s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-090979 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-090979 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m54.446599026s)
--- PASS: TestAddons/Setup (174.45s)

TestAddons/serial/GCPAuth/Namespaces (0.19s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-090979 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-090979 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)

TestAddons/serial/GCPAuth/FakeCredentials (11.92s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-090979 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-090979 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [4378c9d2-4ca0-416e-bdea-0bf80c3d755b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [4378c9d2-4ca0-416e-bdea-0bf80c3d755b] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 11.00375463s
addons_test.go:694: (dbg) Run:  kubectl --context addons-090979 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-090979 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-090979 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-090979 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (11.92s)
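
Note: the three kubectl exec probes above verify that the gcp-auth webhook injected GOOGLE_APPLICATION_CREDENTIALS, mounted the fake key file, and set GOOGLE_CLOUD_PROJECT. The same checks performed from inside a pod, as a minimal Go sketch:

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// Path is injected by the gcp-auth webhook (/google-app-creds.json in the run above).
		path := os.Getenv("GOOGLE_APPLICATION_CREDENTIALS")
		creds, err := os.ReadFile(path)
		if err != nil {
			fmt.Fprintln(os.Stderr, "credentials not injected:", err)
			os.Exit(1)
		}
		fmt.Printf("project=%s, credentials=%d bytes\n", os.Getenv("GOOGLE_CLOUD_PROJECT"), len(creds))
	}
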
TestAddons/parallel/Registry (17.5s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 14.877555ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-brlbg" [00043075-d8d2-4dc6-b57e-cecbd79fd981] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003871452s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-9pffk" [d9dbf333-7861-40eb-ab83-fc6661520da1] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.003459626s
addons_test.go:392: (dbg) Run:  kubectl --context addons-090979 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-090979 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-090979 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.413020811s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-090979 ip
2025/09/08 12:37:06 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-090979 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.50s)
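
Note: the registry addon is probed twice above: in-cluster via wget --spider against registry.kube-system.svc.cluster.local, and from the host via the node IP on port 5000 (the [DEBUG] GET line). The host-side probe as a Go sketch, reusing the address from the log:

	package main

	import (
		"fmt"
		"log"
		"net/http"
		"time"
	)

	func main() {
		// Same check as the [DEBUG] GET above: the registry should answer
		// plain HTTP on the node IP.
		client := &http.Client{Timeout: 5 * time.Second}
		resp, err := client.Get("http://192.168.49.2:5000")
		if err != nil {
			log.Fatal(err)
		}
		defer resp.Body.Close()
		fmt.Println("registry responded:", resp.Status)
	}
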
TestAddons/parallel/RegistryCreds (0.74s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 5.374499ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-090979
addons_test.go:332: (dbg) Run:  kubectl --context addons-090979 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-090979 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.74s)

TestAddons/parallel/InspektorGadget (6.28s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-hfwws" [6933d866-ae82-4d32-ac31-40a3e091c52e] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003534221s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-090979 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (6.28s)

TestAddons/parallel/MetricsServer (6.88s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 5.203076ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-p5sf7" [2ab85486-40f7-420a-a23a-20e524ee6bd9] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003148546s
addons_test.go:463: (dbg) Run:  kubectl --context addons-090979 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-090979 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.88s)
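The pass condition here is simply that kubectl top returns data once the metrics-server pod is Ready. The same check by hand (the node variant is a standard kubectl subcommand, added here as an extra):

    kubectl --context addons-090979 top pods -n kube-system
    kubectl --context addons-090979 top nodes

Until the addon's APIService is registered and serving, both commands fail with an error along the lines of "Metrics API not available".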

TestAddons/parallel/CSI (40.17s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0908 12:37:32.405420  560849 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0908 12:37:32.409856  560849 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0908 12:37:32.409889  560849 kapi.go:107] duration metric: took 8.155521ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 8.170322ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-090979 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-090979 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-090979 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-090979 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-090979 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-090979 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [27eeee16-1692-436f-bc38-4fb690c4ec27] Pending
helpers_test.go:352: "task-pv-pod" [27eeee16-1692-436f-bc38-4fb690c4ec27] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [27eeee16-1692-436f-bc38-4fb690c4ec27] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.00294206s
addons_test.go:572: (dbg) Run:  kubectl --context addons-090979 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-090979 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-090979 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-090979 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-090979 delete pod task-pv-pod: (1.141596421s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-090979 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-090979 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-090979 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-090979 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-090979 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-090979 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-090979 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-090979 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-090979 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [a8ceabcf-3f89-426d-a767-11841080ce92] Pending
helpers_test.go:352: "task-pv-pod-restore" [a8ceabcf-3f89-426d-a767-11841080ce92] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [a8ceabcf-3f89-426d-a767-11841080ce92] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004031178s
addons_test.go:614: (dbg) Run:  kubectl --context addons-090979 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-090979 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-090979 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-090979 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-090979 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-090979 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.026160954s)
--- PASS: TestAddons/parallel/CSI (40.17s)
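The snapshot and restore steps above reduce to two objects. The sketch below is a reconstruction, since the log does not print the YAML itself; in particular the class names csi-hostpath-snapclass and csi-hostpath-sc are assumptions about what the volumesnapshots and csi-hostpath-driver addons register:

    # snapshot-and-restore.yaml (hypothetical reconstruction)
    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshot
    metadata:
      name: new-snapshot-demo
    spec:
      volumeSnapshotClassName: csi-hostpath-snapclass   # assumed class name
      source:
        persistentVolumeClaimName: hpvc
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: hpvc-restore
    spec:
      storageClassName: csi-hostpath-sc                 # assumed class name
      dataSource:
        name: new-snapshot-demo
        kind: VolumeSnapshot
        apiGroup: snapshot.storage.k8s.io
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi

The repeated volumesnapshot polls above are waiting for readyToUse to flip to true; the restored claim can only provision once that happens.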

TestAddons/parallel/Headlamp (17.92s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-090979 --alsologtostderr -v=1
addons_test.go:808: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-090979 --alsologtostderr -v=1: (1.000863195s)
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-85f8f8dc54-vk4wh" [f82ebfc5-3109-4843-87ab-c6876721f1e3] Pending
helpers_test.go:352: "headlamp-85f8f8dc54-vk4wh" [f82ebfc5-3109-4843-87ab-c6876721f1e3] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-85f8f8dc54-vk4wh" [f82ebfc5-3109-4843-87ab-c6876721f1e3] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.003954514s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-090979 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-090979 addons disable headlamp --alsologtostderr -v=1: (5.913772115s)
--- PASS: TestAddons/parallel/Headlamp (17.92s)

TestAddons/parallel/CloudSpanner (6.62s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-c55d4cb6d-vmvp2" [bcfe7ebf-65cc-4d1c-be6a-ed2100f3a6c5] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004093021s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-090979 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.62s)

TestAddons/parallel/LocalPath (52.1s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-090979 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-090979 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-090979 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-090979 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-090979 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-090979 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-090979 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-090979 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [c2033905-1e69-46bc-bd26-daf4f50db782] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [c2033905-1e69-46bc-bd26-daf4f50db782] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [c2033905-1e69-46bc-bd26-daf4f50db782] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003198787s
addons_test.go:967: (dbg) Run:  kubectl --context addons-090979 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-090979 ssh "cat /opt/local-path-provisioner/pvc-9785f1bd-055d-44c5-a947-2a5a3a5a8e1c_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-090979 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-090979 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-090979 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-090979 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.980202031s)
--- PASS: TestAddons/parallel/LocalPath (52.10s)
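The claim being exercised is an ordinary PVC against the local-path provisioner; a sketch, assuming the addon registers its storage class under the provisioner's conventional name local-path:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: test-pvc
    spec:
      storageClassName: local-path   # assumed class name
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 64Mi

The string of Pending polls in the log is expected rather than a failure: local-path provisioning typically uses WaitForFirstConsumer binding, so the claim stays Pending until the consuming pod is scheduled.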

TestAddons/parallel/NvidiaDevicePlugin (6.65s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-8qq6w" [b6281aed-d538-40d1-9efe-6f733a1faf5f] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.0030487s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-090979 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.65s)

TestAddons/parallel/Yakd (11.94s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-nnpff" [6a2f61bd-4002-46b3-8584-08c8df93118c] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004133053s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-090979 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-090979 addons disable yakd --alsologtostderr -v=1: (5.93616823s)
--- PASS: TestAddons/parallel/Yakd (11.94s)

TestAddons/StoppedEnableDisable (12.2s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-090979
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-090979: (11.909476612s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-090979
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-090979
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-090979
--- PASS: TestAddons/StoppedEnableDisable (12.20s)

TestCertOptions (44.26s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-124649 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
E0908 13:31:12.385839  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/addons-090979/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:31:29.298983  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/addons-090979/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-124649 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (41.575755433s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-124649 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-124649 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-124649 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-124649" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-124649
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-124649: (1.991047535s)
--- PASS: TestCertOptions (44.26s)
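The openssl step is asserting that every --apiserver-ips and --apiserver-names value landed in the certificate's subject alternative names. A quick manual version of the same check (grep added here for readability):

    out/minikube-linux-arm64 -p cert-options-124649 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 'Subject Alternative Name'

The SAN list should include 192.168.15.15, localhost and www.google.com from the start flags, and the admin.conf check above confirms the 8555 apiserver port.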

TestCertExpiration (259.72s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-584837 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-584837 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (44.995482574s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-584837 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-584837 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (32.165927365s)
helpers_test.go:175: Cleaning up "cert-expiration-584837" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-584837
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-584837: (2.558510864s)
--- PASS: TestCertExpiration (259.72s)
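The first start issues cluster certificates with a 3m lifetime; the second start of the same profile with --cert-expiration=8760h exercises recovery after that window has elapsed. The effective expiry can be inspected directly on the node with a standard openssl invocation:

    out/minikube-linux-arm64 -p cert-expiration-584837 ssh \
      "openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"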

TestForceSystemdFlag (39.23s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-482533 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0908 13:29:34.965718  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/functional-491794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-482533 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (36.064223227s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-482533 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-482533" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-482533
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-482533: (2.794788079s)
--- PASS: TestForceSystemdFlag (39.23s)
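The cat of 02-crio.conf is checking which cgroup manager CRI-O was configured with. A focused version of the same check (the expected value is an assumption based on CRI-O's standard cgroup_manager key):

    out/minikube-linux-arm64 -p force-systemd-flag-482533 ssh \
      "cat /etc/crio/crio.conf.d/02-crio.conf" | grep cgroup_manager
    # expected with --force-systemd: cgroup_manager = "systemd"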

TestForceSystemdEnv (38.71s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-091954 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-091954 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (35.652772912s)
helpers_test.go:175: Cleaning up "force-systemd-env-091954" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-091954
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-091954: (3.054026434s)
--- PASS: TestForceSystemdEnv (38.71s)

TestErrorSpam/setup (33.96s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-656432 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-656432 --driver=docker  --container-runtime=crio
E0908 12:41:29.310273  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/addons-090979/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:41:29.317273  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/addons-090979/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:41:29.328735  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/addons-090979/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:41:29.350205  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/addons-090979/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:41:29.391673  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/addons-090979/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:41:29.473183  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/addons-090979/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:41:29.634711  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/addons-090979/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:41:29.956441  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/addons-090979/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:41:30.598085  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/addons-090979/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:41:31.879457  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/addons-090979/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-656432 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-656432 --driver=docker  --container-runtime=crio: (33.960436033s)
--- PASS: TestErrorSpam/setup (33.96s)

TestErrorSpam/start (0.83s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-656432 --log_dir /tmp/nospam-656432 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-656432 --log_dir /tmp/nospam-656432 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-656432 --log_dir /tmp/nospam-656432 start --dry-run
--- PASS: TestErrorSpam/start (0.83s)

TestErrorSpam/status (1.14s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-656432 --log_dir /tmp/nospam-656432 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-656432 --log_dir /tmp/nospam-656432 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-656432 --log_dir /tmp/nospam-656432 status
--- PASS: TestErrorSpam/status (1.14s)

TestErrorSpam/pause (1.89s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-656432 --log_dir /tmp/nospam-656432 pause
E0908 12:41:34.440716  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/addons-090979/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-656432 --log_dir /tmp/nospam-656432 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-656432 --log_dir /tmp/nospam-656432 pause
--- PASS: TestErrorSpam/pause (1.89s)

TestErrorSpam/unpause (1.94s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-656432 --log_dir /tmp/nospam-656432 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-656432 --log_dir /tmp/nospam-656432 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-656432 --log_dir /tmp/nospam-656432 unpause
--- PASS: TestErrorSpam/unpause (1.94s)

TestErrorSpam/stop (1.55s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-656432 --log_dir /tmp/nospam-656432 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-656432 --log_dir /tmp/nospam-656432 stop: (1.343773504s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-656432 --log_dir /tmp/nospam-656432 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-656432 --log_dir /tmp/nospam-656432 stop
E0908 12:41:39.562366  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/addons-090979/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestErrorSpam/stop (1.55s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21508-558996/.minikube/files/etc/test/nested/copy/560849/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (80.46s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-491794 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E0908 12:41:49.804203  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/addons-090979/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:42:10.285890  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/addons-090979/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:42:51.247202  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/addons-090979/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-491794 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m20.457179148s)
--- PASS: TestFunctional/serial/StartWithProxy (80.46s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (28.21s)

=== RUN   TestFunctional/serial/SoftStart
I0908 12:43:05.071947  560849 config.go:182] Loaded profile config "functional-491794": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-491794 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-491794 --alsologtostderr -v=8: (28.205797132s)
functional_test.go:678: soft start took 28.20841579s for "functional-491794" cluster.
I0908 12:43:33.278083  560849 config.go:182] Loaded profile config "functional-491794": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/SoftStart (28.21s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-491794 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.82s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-491794 cache add registry.k8s.io/pause:3.1: (1.289572189s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-491794 cache add registry.k8s.io/pause:3.3: (1.291110053s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-491794 cache add registry.k8s.io/pause:latest: (1.240128304s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.82s)
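The cache subcommands pull each image into minikube's host-side cache and load it into the node's container runtime; condensed, the section above amounts to:

    out/minikube-linux-arm64 -p functional-491794 cache add registry.k8s.io/pause:3.1
    out/minikube-linux-arm64 cache list
    out/minikube-linux-arm64 -p functional-491794 ssh sudo crictl images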

TestFunctional/serial/CacheCmd/cache/add_local (1.43s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-491794 /tmp/TestFunctionalserialCacheCmdcacheadd_local1830558133/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 cache add minikube-local-cache-test:functional-491794
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 cache delete minikube-local-cache-test:functional-491794
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-491794
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.43s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-491794 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (301.224805ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-linux-arm64 -p functional-491794 cache reload: (1.181108195s)
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.16s)
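This section demonstrates the recovery path end to end: the image is removed from the node, crictl inspecti confirms it is gone (the exit status 1 block above), and cache reload pushes the cached copy back so the final inspecti succeeds:

    out/minikube-linux-arm64 -p functional-491794 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-arm64 -p functional-491794 cache reload
    out/minikube-linux-arm64 -p functional-491794 ssh sudo crictl inspecti registry.k8s.io/pause:latest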

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 kubectl -- --context functional-491794 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-491794 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (37.74s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-491794 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0908 12:44:13.171549  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/addons-090979/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-491794 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.741524574s)
functional_test.go:776: restart took 37.741628502s for "functional-491794" cluster.
I0908 12:44:19.411593  560849 config.go:182] Loaded profile config "functional-491794": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/ExtraConfig (37.74s)
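--extra-config=apiserver.enable-admission-plugins=... is forwarded to the kube-apiserver static pod, which is why the restart takes a full control-plane cycle. Whether the flag landed can be confirmed from the pod spec (a sketch; component=kube-apiserver is the standard kubeadm label):

    kubectl --context functional-491794 -n kube-system get pod \
      -l component=kube-apiserver -o yaml | grep enable-admission-plugins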

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-491794 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.75s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-491794 logs: (1.749486431s)
--- PASS: TestFunctional/serial/LogsCmd (1.75s)

TestFunctional/serial/LogsFileCmd (1.77s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 logs --file /tmp/TestFunctionalserialLogsFileCmd4224124239/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-491794 logs --file /tmp/TestFunctionalserialLogsFileCmd4224124239/001/logs.txt: (1.769690809s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.77s)

TestFunctional/serial/InvalidService (4.29s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-491794 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-491794
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-491794: exit status 115 (379.795991ms)
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31892 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-491794 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.29s)

TestFunctional/parallel/ConfigCmd (0.53s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-491794 config get cpus: exit status 14 (81.834933ms)

** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-491794 config get cpus: exit status 14 (114.020455ms)

** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.53s)
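The unset/get pairs assert both the value round-trip and the error path: exit status 14, shown twice above, is what config get returns when the key is absent. Condensed:

    out/minikube-linux-arm64 -p functional-491794 config set cpus 2
    out/minikube-linux-arm64 -p functional-491794 config get cpus               # prints 2
    out/minikube-linux-arm64 -p functional-491794 config unset cpus
    out/minikube-linux-arm64 -p functional-491794 config get cpus; echo $?     # 14: key not found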

TestFunctional/parallel/DashboardCmd (9.57s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-491794 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-491794 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 591331: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.57s)

TestFunctional/parallel/DryRun (0.47s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-491794 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-491794 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (229.50073ms)
-- stdout --
	* [functional-491794] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21508
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21508-558996/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-558996/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0908 12:54:47.972258  589475 out.go:360] Setting OutFile to fd 1 ...
	I0908 12:54:47.972462  589475 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:54:47.972491  589475 out.go:374] Setting ErrFile to fd 2...
	I0908 12:54:47.972510  589475 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:54:47.972827  589475 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-558996/.minikube/bin
	I0908 12:54:47.973312  589475 out.go:368] Setting JSON to false
	I0908 12:54:47.974369  589475 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":9440,"bootTime":1757326648,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I0908 12:54:47.974496  589475 start.go:140] virtualization:  
	I0908 12:54:47.977924  589475 out.go:179] * [functional-491794] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	I0908 12:54:47.986537  589475 out.go:179]   - MINIKUBE_LOCATION=21508
	I0908 12:54:47.986778  589475 notify.go:220] Checking for updates...
	I0908 12:54:47.992586  589475 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 12:54:47.995672  589475 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21508-558996/kubeconfig
	I0908 12:54:47.998695  589475 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-558996/.minikube
	I0908 12:54:48.001681  589475 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0908 12:54:48.004642  589475 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 12:54:48.008059  589475 config.go:182] Loaded profile config "functional-491794": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 12:54:48.008662  589475 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 12:54:48.044325  589475 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 12:54:48.044460  589475 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 12:54:48.115538  589475 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-08 12:54:48.105406761 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 12:54:48.115651  589475 docker.go:318] overlay module found
	I0908 12:54:48.118705  589475 out.go:179] * Using the docker driver based on existing profile
	I0908 12:54:48.121399  589475 start.go:304] selected driver: docker
	I0908 12:54:48.121418  589475 start.go:918] validating driver "docker" against &{Name:functional-491794 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-491794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 12:54:48.121530  589475 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 12:54:48.125005  589475 out.go:203] 
	W0908 12:54:48.127838  589475 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0908 12:54:48.130714  589475 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-491794 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.47s)
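The two dry-run invocations above can be reproduced by hand against the existing profile; a minimal sketch, assuming functional-491794 is already configured (--dry-run only validates flags, nothing is started or mutated):

	# valid flags: exits 0 without touching the cluster
	out/minikube-linux-arm64 start -p functional-491794 --dry-run --alsologtostderr -v=1 --driver=docker --container-runtime=crio
	# 250MB is below the usable minimum of 1800MB: exits non-zero with RSRC_INSUFFICIENT_REQ_MEMORY (status 23 in the French-locale run below)
	out/minikube-linux-arm64 start -p functional-491794 --dry-run --memory 250MB --driver=docker --container-runtime=crio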

TestFunctional/parallel/InternationalLanguage (0.21s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-491794 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-491794 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (205.296761ms)

-- stdout --
	* [functional-491794] minikube v1.36.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21508
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21508-558996/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-558996/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0908 12:55:02.554109  591144 out.go:360] Setting OutFile to fd 1 ...
	I0908 12:55:02.554253  591144 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:55:02.554265  591144 out.go:374] Setting ErrFile to fd 2...
	I0908 12:55:02.554296  591144 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:55:02.556002  591144 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-558996/.minikube/bin
	I0908 12:55:02.556540  591144 out.go:368] Setting JSON to false
	I0908 12:55:02.557597  591144 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":9455,"bootTime":1757326648,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I0908 12:55:02.557672  591144 start.go:140] virtualization:  
	I0908 12:55:02.561169  591144 out.go:179] * [functional-491794] minikube v1.36.0 sur Ubuntu 20.04 (arm64)
	I0908 12:55:02.564734  591144 out.go:179]   - MINIKUBE_LOCATION=21508
	I0908 12:55:02.564854  591144 notify.go:220] Checking for updates...
	I0908 12:55:02.570215  591144 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 12:55:02.573038  591144 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21508-558996/kubeconfig
	I0908 12:55:02.575865  591144 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-558996/.minikube
	I0908 12:55:02.578615  591144 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0908 12:55:02.581368  591144 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 12:55:02.584596  591144 config.go:182] Loaded profile config "functional-491794": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 12:55:02.585192  591144 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 12:55:02.610304  591144 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 12:55:02.610418  591144 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 12:55:02.667762  591144 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-08 12:55:02.658634596 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 12:55:02.667877  591144 docker.go:318] overlay module found
	I0908 12:55:02.670875  591144 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I0908 12:55:02.673863  591144 start.go:304] selected driver: docker
	I0908 12:55:02.673881  591144 start.go:918] validating driver "docker" against &{Name:functional-491794 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.47-1756980985-21488@sha256:8004ef31c95f43ea4d909587f47b84b33af26368a459c00cd53d571affb59c79 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-491794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0908 12:55:02.673988  591144 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 12:55:02.677643  591144 out.go:203] 
	W0908 12:55:02.680510  591144 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0908 12:55:02.683321  591144 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.21s)
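The French output above (« Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY », i.e. "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY", matching the English run in the previous section) is driven by the host locale rather than a minikube flag; a minimal sketch of reproducing it, assuming minikube picks its language from the standard LC_ALL/LANG environment variables and that an fr_FR locale is installed on the host:

	LC_ALL=fr_FR.UTF-8 out/minikube-linux-arm64 start -p functional-491794 --dry-run --memory 250MB --driver=docker --container-runtime=crio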

TestFunctional/parallel/StatusCmd (1.06s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.06s)
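The second status invocation above exercises the -f flag, which takes a Go template over the status fields (the test's template spells the kubelet label "kublet", but the underlying field is .Kubelet); a usage sketch:

	out/minikube-linux-arm64 -p functional-491794 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
	# on a healthy cluster this typically prints something like: host:Running,kubelet:Running,apiserver:Running,kubeconfig:Configured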

TestFunctional/parallel/AddonsCmd (0.19s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.19s)

TestFunctional/parallel/PersistentVolumeClaim (27.02s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [ef22f97a-3fe5-4fb2-995f-a22ccf297c95] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003639638s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-491794 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-491794 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-491794 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-491794 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [9333851a-5677-4b27-84a1-ee5505a2241a] Pending
helpers_test.go:352: "sp-pod" [9333851a-5677-4b27-84a1-ee5505a2241a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [9333851a-5677-4b27-84a1-ee5505a2241a] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.004164648s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-491794 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-491794 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-491794 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [59e5d1ed-178d-4d3e-a43e-e9c7d0c802f0] Pending
helpers_test.go:352: "sp-pod" [59e5d1ed-178d-4d3e-a43e-e9c7d0c802f0] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003910148s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-491794 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.02s)
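The sequence above is the standard PVC persistence check: write through the claim, delete the pod, recreate it, and confirm the file survived; condensed from the commands in this test:

	kubectl --context functional-491794 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-491794 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-491794 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-491794 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-491794 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-491794 exec sp-pod -- ls /tmp/mount   # foo survives the pod recreation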

TestFunctional/parallel/SSHCmd (0.69s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.69s)

TestFunctional/parallel/CpCmd (2.06s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 ssh -n functional-491794 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 cp functional-491794:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd821293478/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 ssh -n functional-491794 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 ssh -n functional-491794 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.06s)
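minikube cp copies files in both directions between the host and the node, as the invocations above show; a condensed sketch:

	out/minikube-linux-arm64 -p functional-491794 cp testdata/cp-test.txt /home/docker/cp-test.txt             # host -> node
	out/minikube-linux-arm64 -p functional-491794 cp functional-491794:/home/docker/cp-test.txt ./cp-test.txt  # node -> host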

TestFunctional/parallel/FileSync (0.34s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/560849/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 ssh "sudo cat /etc/test/nested/copy/560849/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.34s)
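A minimal sketch of the mechanism this test exercises, assuming the usual MINIKUBE_HOME layout: files placed under $MINIKUBE_HOME/files/<path> are synced into the node at /<path>, picked up on the next minikube start:

	mkdir -p ~/.minikube/files/etc/test/nested/copy/560849
	echo 'Test file for checking file sync process' > ~/.minikube/files/etc/test/nested/copy/560849/hosts
	out/minikube-linux-arm64 -p functional-491794 ssh "sudo cat /etc/test/nested/copy/560849/hosts"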

TestFunctional/parallel/CertSync (2.17s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/560849.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 ssh "sudo cat /etc/ssl/certs/560849.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/560849.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 ssh "sudo cat /usr/share/ca-certificates/560849.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/5608492.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 ssh "sudo cat /etc/ssl/certs/5608492.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/5608492.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 ssh "sudo cat /usr/share/ca-certificates/5608492.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.17s)
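The hashed filenames checked above (51391683.0, 3ec20f2e.0) are the OpenSSL subject-hash names for the synced certificates; a sketch of verifying the correspondence, assuming openssl is available inside the node:

	out/minikube-linux-arm64 -p functional-491794 ssh "openssl x509 -hash -noout -in /etc/ssl/certs/560849.pem"
	# prints 51391683, matching the /etc/ssl/certs/51391683.0 entry checked above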

TestFunctional/parallel/NodeLabels (0.1s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-491794 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.71s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-491794 ssh "sudo systemctl is-active docker": exit status 1 (352.133427ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-491794 ssh "sudo systemctl is-active containerd": exit status 1 (358.345699ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.71s)
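With crio as the active runtime, the docker and containerd units are expected to be inactive; systemctl is-active exits 3 for an inactive unit, which is the "Process exited with status 3" seen above. A condensed check, assuming the CRI-O unit is named crio:

	out/minikube-linux-arm64 -p functional-491794 ssh "sudo systemctl is-active crio"        # active, exit 0
	out/minikube-linux-arm64 -p functional-491794 ssh "sudo systemctl is-active docker"      # inactive, exit 3
	out/minikube-linux-arm64 -p functional-491794 ssh "sudo systemctl is-active containerd"  # inactive, exit 3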

TestFunctional/parallel/License (0.36s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.36s)

TestFunctional/parallel/Version/short (0.09s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

TestFunctional/parallel/Version/components (1.39s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-491794 version -o=json --components: (1.387274526s)
--- PASS: TestFunctional/parallel/Version/components (1.39s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-491794 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.0
registry.k8s.io/kube-proxy:v1.34.0
registry.k8s.io/kube-controller-manager:v1.34.0
registry.k8s.io/kube-apiserver:v1.34.0
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-491794
localhost/kicbase/echo-server:functional-491794
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-491794 image ls --format short --alsologtostderr:
I0908 12:55:13.801535  592561 out.go:360] Setting OutFile to fd 1 ...
I0908 12:55:13.801763  592561 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 12:55:13.801831  592561 out.go:374] Setting ErrFile to fd 2...
I0908 12:55:13.801853  592561 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 12:55:13.802175  592561 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-558996/.minikube/bin
I0908 12:55:13.802856  592561 config.go:182] Loaded profile config "functional-491794": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 12:55:13.803061  592561 config.go:182] Loaded profile config "functional-491794": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 12:55:13.803560  592561 cli_runner.go:164] Run: docker container inspect functional-491794 --format={{.State.Status}}
I0908 12:55:13.836108  592561 ssh_runner.go:195] Run: systemctl --version
I0908 12:55:13.836164  592561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-491794
I0908 12:55:13.861753  592561 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33514 SSHKeyPath:/home/jenkins/minikube-integration/21508-558996/.minikube/machines/functional-491794/id_rsa Username:docker}
I0908 12:55:13.952084  592561 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)
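image ls accepts four --format values, exercised here and in the next three sections; a condensed sketch:

	out/minikube-linux-arm64 -p functional-491794 image ls --format short   # one image reference per line
	out/minikube-linux-arm64 -p functional-491794 image ls --format table   # box-drawn table with tag, image ID and size
	out/minikube-linux-arm64 -p functional-491794 image ls --format json    # array of {id, repoDigests, repoTags, size}
	out/minikube-linux-arm64 -p functional-491794 image ls --format yaml    # same data as a YAML list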

TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-491794 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/library/nginx                 │ alpine             │ 35f3cbee4fb77 │ 54.3MB │
│ localhost/kicbase/echo-server           │ functional-491794  │ ce2d2cda2d858 │ 4.79MB │
│ localhost/minikube-local-cache-test     │ functional-491794  │ f823ad654ff31 │ 3.33kB │
│ registry.k8s.io/kube-controller-manager │ v1.34.0            │ 996be7e86d9b3 │ 72.6MB │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 138784d87c9c5 │ 73.2MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ a1894772a478e │ 206MB  │
│ registry.k8s.io/kube-apiserver          │ v1.34.0            │ d291939e99406 │ 84.8MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ docker.io/library/nginx                 │ latest             │ 47ef8710c9f5a │ 202MB  │
│ registry.k8s.io/kube-scheduler          │ v1.34.0            │ a25f5ef9c34c3 │ 51.6MB │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 1611cd07b61d5 │ 3.77MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ registry.k8s.io/kube-proxy              │ v1.34.0            │ 6fc32d66c1411 │ 75.9MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-491794 image ls --format table --alsologtostderr:
I0908 12:55:14.615224  592771 out.go:360] Setting OutFile to fd 1 ...
I0908 12:55:14.615395  592771 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 12:55:14.615405  592771 out.go:374] Setting ErrFile to fd 2...
I0908 12:55:14.615410  592771 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 12:55:14.615680  592771 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-558996/.minikube/bin
I0908 12:55:14.616283  592771 config.go:182] Loaded profile config "functional-491794": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 12:55:14.616407  592771 config.go:182] Loaded profile config "functional-491794": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 12:55:14.616842  592771 cli_runner.go:164] Run: docker container inspect functional-491794 --format={{.State.Status}}
I0908 12:55:14.636548  592771 ssh_runner.go:195] Run: systemctl --version
I0908 12:55:14.636605  592771 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-491794
I0908 12:55:14.655257  592771 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33514 SSHKeyPath:/home/jenkins/minikube-integration/21508-558996/.minikube/machines/functional-491794/id_rsa Username:docker}
I0908 12:55:14.750349  592771 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-491794 image ls --format json --alsologtostderr:
[{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"35f3cbee4fb77c3efb39f2723a21ce181906139442a37de8ffc52d89641d9936","repoDigests":["docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8","docker.io/library/nginx@sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54348302"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["localhost/kicbase/echo-server:functional-491794"],"size":"4788229"},{"id":"a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","repoDigests":["registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"205987068"},{"id":"6fc32d66c141152245438e6512df788cb52d64a1617e33561950b0e7a4675abf","repoDigests":["registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067","registry.k8s.io/kube-proxy@sha256:49d0c22a7e97772329396bf30c435c70c6ad77f527040f4334b88076500e883e"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.0"],"size":"75938711"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"996be7e86d9b3a549d718de63713d9fea9db1f45ac44863a6770292d7b463570","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:02610466f968b6af36e9e76aee7a1d52f922fba4ec5cdb7a5423137d726f0da5","registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.0"],"size":"72629077"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"47ef8710c9f5a9276b3e347e3ab71ee44c8483e20f8636380ae2737ef4c27758","repoDigests":["docker.io/library/nginx@sha256:1e297dbd6dd3441f54fbeeef6be4688f257a85580b21940d18c2c11f9ce6a708","docker.io/library/nginx@sha256:33e0bbc7ca9ecf108140af6288c7c9d1ecc77548cbfd3952fd8466a75edefe57"],"repoTags":["docker.io/library/nginx:latest"],"size":"202036629"},{"id":"f823ad654ff31477556227b679fb3e1927230ce6434b7f6bcfaee907fb26364c","repoDigests":["localhost/minikube-local-cache-test@sha256:d73f14c282d3ca7924b9e2d9c50399b4c24c545cc24843f0d149310ccd7ce73c"],"repoTags":["localhost/minikube-local-cache-test:functional-491794"],"size":"3330"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"},{"id":"d291939e994064911484215449d0ab96c535b02adc4fc5d0ad4e438cf71465be","repoDigests":["registry.k8s.io/kube-apiserver@sha256:ef0790d885e5e46cad864b09351a201eb54f01ea5755de1c3a53a7d90ea1286f","registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.0"],"size":"84818927"},{"id":"a25f5ef9c34c37c649f3b4f9631a169221ac2d6f41d9767c7588cd355f76f9ee","repoDigests":["registry.k8s.io/kube-scheduler@sha256:03bf1b9fae1536dc052874c2943f6c9c16410bf65e88e042109d7edc0e574422","registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.0"],"size":"51592021"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-491794 image ls --format json --alsologtostderr:
I0908 12:55:14.357731  592703 out.go:360] Setting OutFile to fd 1 ...
I0908 12:55:14.357905  592703 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 12:55:14.357931  592703 out.go:374] Setting ErrFile to fd 2...
I0908 12:55:14.357951  592703 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 12:55:14.358244  592703 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-558996/.minikube/bin
I0908 12:55:14.358974  592703 config.go:182] Loaded profile config "functional-491794": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 12:55:14.359206  592703 config.go:182] Loaded profile config "functional-491794": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 12:55:14.359788  592703 cli_runner.go:164] Run: docker container inspect functional-491794 --format={{.State.Status}}
I0908 12:55:14.379640  592703 ssh_runner.go:195] Run: systemctl --version
I0908 12:55:14.379696  592703 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-491794
I0908 12:55:14.403981  592703 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33514 SSHKeyPath:/home/jenkins/minikube-integration/21508-558996/.minikube/machines/functional-491794/id_rsa Username:docker}
I0908 12:55:14.499986  592703 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-491794 image ls --format yaml --alsologtostderr:
- id: d291939e994064911484215449d0ab96c535b02adc4fc5d0ad4e438cf71465be
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:ef0790d885e5e46cad864b09351a201eb54f01ea5755de1c3a53a7d90ea1286f
- registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.0
size: "84818927"
- id: a25f5ef9c34c37c649f3b4f9631a169221ac2d6f41d9767c7588cd355f76f9ee
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:03bf1b9fae1536dc052874c2943f6c9c16410bf65e88e042109d7edc0e574422
- registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.0
size: "51592021"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests:
- registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "205987068"
- id: 996be7e86d9b3a549d718de63713d9fea9db1f45ac44863a6770292d7b463570
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:02610466f968b6af36e9e76aee7a1d52f922fba4ec5cdb7a5423137d726f0da5
- registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.0
size: "72629077"
- id: 6fc32d66c141152245438e6512df788cb52d64a1617e33561950b0e7a4675abf
repoDigests:
- registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067
- registry.k8s.io/kube-proxy@sha256:49d0c22a7e97772329396bf30c435c70c6ad77f527040f4334b88076500e883e
repoTags:
- registry.k8s.io/kube-proxy:v1.34.0
size: "75938711"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 35f3cbee4fb77c3efb39f2723a21ce181906139442a37de8ffc52d89641d9936
repoDigests:
- docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8
- docker.io/library/nginx@sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac
repoTags:
- docker.io/library/nginx:alpine
size: "54348302"
- id: 47ef8710c9f5a9276b3e347e3ab71ee44c8483e20f8636380ae2737ef4c27758
repoDigests:
- docker.io/library/nginx@sha256:1e297dbd6dd3441f54fbeeef6be4688f257a85580b21940d18c2c11f9ce6a708
- docker.io/library/nginx@sha256:33e0bbc7ca9ecf108140af6288c7c9d1ecc77548cbfd3952fd8466a75edefe57
repoTags:
- docker.io/library/nginx:latest
size: "202036629"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- localhost/kicbase/echo-server:functional-491794
size: "4788229"
- id: f823ad654ff31477556227b679fb3e1927230ce6434b7f6bcfaee907fb26364c
repoDigests:
- localhost/minikube-local-cache-test@sha256:d73f14c282d3ca7924b9e2d9c50399b4c24c545cc24843f0d149310ccd7ce73c
repoTags:
- localhost/minikube-local-cache-test:functional-491794
size: "3330"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-491794 image ls --format yaml --alsologtostderr:
I0908 12:55:14.072950  592645 out.go:360] Setting OutFile to fd 1 ...
I0908 12:55:14.073149  592645 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 12:55:14.073163  592645 out.go:374] Setting ErrFile to fd 2...
I0908 12:55:14.073169  592645 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 12:55:14.073487  592645 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-558996/.minikube/bin
I0908 12:55:14.074348  592645 config.go:182] Loaded profile config "functional-491794": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 12:55:14.074530  592645 config.go:182] Loaded profile config "functional-491794": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 12:55:14.075074  592645 cli_runner.go:164] Run: docker container inspect functional-491794 --format={{.State.Status}}
I0908 12:55:14.102906  592645 ssh_runner.go:195] Run: systemctl --version
I0908 12:55:14.102973  592645 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-491794
I0908 12:55:14.124101  592645 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33514 SSHKeyPath:/home/jenkins/minikube-integration/21508-558996/.minikube/machines/functional-491794/id_rsa Username:docker}
I0908 12:55:14.218839  592645 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.05s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-491794 ssh pgrep buildkitd: exit status 1 (345.877477ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 image build -t localhost/my-image:functional-491794 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-491794 image build -t localhost/my-image:functional-491794 testdata/build --alsologtostderr: (3.474588777s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-491794 image build -t localhost/my-image:functional-491794 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> f72bf05d918
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-491794
--> a3b5c8fc06c
Successfully tagged localhost/my-image:functional-491794
a3b5c8fc06c3a71a47af9da96efb4702ac00fc40233570bc0779c7f464f13679
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-491794 image build -t localhost/my-image:functional-491794 testdata/build --alsologtostderr:
I0908 12:55:14.167330  592659 out.go:360] Setting OutFile to fd 1 ...
I0908 12:55:14.169691  592659 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 12:55:14.169714  592659 out.go:374] Setting ErrFile to fd 2...
I0908 12:55:14.169721  592659 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0908 12:55:14.170138  592659 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-558996/.minikube/bin
I0908 12:55:14.171199  592659 config.go:182] Loaded profile config "functional-491794": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 12:55:14.172858  592659 config.go:182] Loaded profile config "functional-491794": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0908 12:55:14.173489  592659 cli_runner.go:164] Run: docker container inspect functional-491794 --format={{.State.Status}}
I0908 12:55:14.191525  592659 ssh_runner.go:195] Run: systemctl --version
I0908 12:55:14.191573  592659 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-491794
I0908 12:55:14.209027  592659 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33514 SSHKeyPath:/home/jenkins/minikube-integration/21508-558996/.minikube/machines/functional-491794/id_rsa Username:docker}
I0908 12:55:14.298407  592659 build_images.go:161] Building image from path: /tmp/build.2381551210.tar
I0908 12:55:14.298484  592659 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0908 12:55:14.308646  592659 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2381551210.tar
I0908 12:55:14.312428  592659 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2381551210.tar: stat -c "%s %y" /var/lib/minikube/build/build.2381551210.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2381551210.tar': No such file or directory
I0908 12:55:14.312462  592659 ssh_runner.go:362] scp /tmp/build.2381551210.tar --> /var/lib/minikube/build/build.2381551210.tar (3072 bytes)
I0908 12:55:14.345792  592659 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2381551210
I0908 12:55:14.355818  592659 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2381551210 -xf /var/lib/minikube/build/build.2381551210.tar
I0908 12:55:14.367027  592659 crio.go:315] Building image: /var/lib/minikube/build/build.2381551210
I0908 12:55:14.367096  592659 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-491794 /var/lib/minikube/build/build.2381551210 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I0908 12:55:17.536913  592659 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-491794 /var/lib/minikube/build/build.2381551210 --cgroup-manager=cgroupfs: (3.169793866s)
I0908 12:55:17.536992  592659 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2381551210
I0908 12:55:17.546841  592659 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2381551210.tar
I0908 12:55:17.555942  592659 build_images.go:217] Built localhost/my-image:functional-491794 from /tmp/build.2381551210.tar
I0908 12:55:17.555977  592659 build_images.go:133] succeeded building to: functional-491794
I0908 12:55:17.555983  592659 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.05s)
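A minimal sketch for reproducing this build step by hand; the Dockerfile is reconstructed from the STEP lines above, while the /tmp/build-demo path and the content.txt payload are illustrative assumptions:

    mkdir -p /tmp/build-demo
    printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > /tmp/build-demo/Dockerfile
    echo demo > /tmp/build-demo/content.txt    # placeholder; the real fixture's contents differ
    out/minikube-linux-arm64 -p functional-491794 image build -t localhost/my-image:functional-491794 /tmp/build-demo --alsologtostderr
    out/minikube-linux-arm64 -p functional-491794 image ls    # the new tag should appear

On the crio runtime, the stderr above shows minikube staging the build context as a tar under /var/lib/minikube/build and running podman build inside the node.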

TestFunctional/parallel/ImageCommands/Setup (0.7s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-491794
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.70s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)
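update-context rewrites the kubeconfig entry for this profile so kubectl targets the current API server address after an IP or port change. A sketch of verifying the effect; the jsonpath filter is standard kubectl and not part of this test:

    out/minikube-linux-arm64 -p functional-491794 update-context
    kubectl config view -o jsonpath='{.clusters[?(@.name=="functional-491794")].cluster.server}'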

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.62s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 image load --daemon kicbase/echo-server:functional-491794 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-491794 image load --daemon kicbase/echo-server:functional-491794 --alsologtostderr: (1.321177362s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.62s)
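image load --daemon copies an image from the host's Docker daemon into the cluster's container runtime (crio here), so no registry push is needed. A sketch assuming the tag created in the Setup test above:

    docker image inspect kicbase/echo-server:functional-491794 >/dev/null    # must exist on the host first
    out/minikube-linux-arm64 -p functional-491794 image load --daemon kicbase/echo-server:functional-491794
    out/minikube-linux-arm64 -p functional-491794 image ls | grep echo-server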

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.06s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 image load --daemon kicbase/echo-server:functional-491794 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.06s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.52s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.52s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-491794
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 image load --daemon kicbase/echo-server:functional-491794 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.38s)

TestFunctional/parallel/ProfileCmd/profile_list (0.56s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "480.62493ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "82.31601ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.56s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.52s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "425.900346ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "93.14464ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.52s)
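The timings above show why --light exists: it skips per-profile status probing (~93ms versus ~426ms for the full listing). A sketch of consuming the output in a script; jq and the valid/Name field names are assumptions about minikube's JSON shape, which is not printed in this log:

    out/minikube-linux-arm64 profile list -o json --light | jq -r '.valid[].Name'    # field names assumed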

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.7s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 image save kicbase/echo-server:functional-491794 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.70s)
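image save writes the image to a tarball on the host, which pairs with the ImageLoadFromFile test below for a registry-free round trip. A sketch; the /tmp path is illustrative:

    out/minikube-linux-arm64 -p functional-491794 image save kicbase/echo-server:functional-491794 /tmp/echo-server-save.tar
    out/minikube-linux-arm64 -p functional-491794 image rm kicbase/echo-server:functional-491794
    out/minikube-linux-arm64 -p functional-491794 image load /tmp/echo-server-save.tar
    out/minikube-linux-arm64 -p functional-491794 image ls | grep echo-server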

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.67s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-491794 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-491794 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-491794 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-491794 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 587826: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.67s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.81s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 image rm kicbase/echo-server:functional-491794 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.81s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-491794 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.49s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-491794 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [dd60550b-4277-4203-acd1-5880263a9626] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [dd60550b-4277-4203-acd1-5880263a9626] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.005902613s
I0908 12:44:43.448876  560849 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.49s)
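testdata/testsvc.yaml is not reproduced in this log; judging from the run=nginx-svc selector and the LoadBalancer checks that follow, a hypothetical stand-in would be:

    kubectl --context functional-491794 apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx-svc
      labels:
        run: nginx-svc
    spec:
      containers:
      - name: nginx
        image: nginx
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-svc
    spec:
      type: LoadBalancer
      selector:
        run: nginx-svc
      ports:
      - port: 80
    EOF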

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.95s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.95s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.63s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-491794
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 image save --daemon kicbase/echo-server:functional-491794 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-491794
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.63s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.13s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-491794 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.13s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.96.253.45 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
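While minikube tunnel runs, LoadBalancer services receive an ingress IP from the service CIDR (10.96.253.45 here) that the tunnel makes routable from the host. A sketch of the check this test performs:

    IP=$(kubectl --context functional-491794 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    curl -s "http://$IP/" | head -n 4    # nginx welcome page if the tunnel is up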

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-491794 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/MountCmd/any-port (8.83s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-491794 /tmp/TestFunctionalparallelMountCmdany-port1636855626/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1757336088380946544" to /tmp/TestFunctionalparallelMountCmdany-port1636855626/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1757336088380946544" to /tmp/TestFunctionalparallelMountCmdany-port1636855626/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1757336088380946544" to /tmp/TestFunctionalparallelMountCmdany-port1636855626/001/test-1757336088380946544
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-491794 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (341.388496ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0908 12:54:48.723422  560849 retry.go:31] will retry after 492.708391ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep  8 12:54 created-by-test
-rw-r--r-- 1 docker docker 24 Sep  8 12:54 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep  8 12:54 test-1757336088380946544
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 ssh cat /mount-9p/test-1757336088380946544
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-491794 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [c8ad3f53-6be4-4375-ada2-36f67e1455d8] Pending
helpers_test.go:352: "busybox-mount" [c8ad3f53-6be4-4375-ada2-36f67e1455d8] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [c8ad3f53-6be4-4375-ada2-36f67e1455d8] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [c8ad3f53-6be4-4375-ada2-36f67e1455d8] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.003459394s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-491794 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-491794 /tmp/TestFunctionalparallelMountCmdany-port1636855626/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.83s)
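minikube mount serves the host directory over 9p and mounts it in the node at /mount-9p, so host-side writes become visible inside the node (and to any pod that mounts that path; testdata/busybox-mount-test.yaml is such a fixture but is not shown here). A manual sketch with an illustrative /tmp/hostdir:

    mkdir -p /tmp/hostdir
    out/minikube-linux-arm64 mount -p functional-491794 /tmp/hostdir:/mount-9p &
    out/minikube-linux-arm64 -p functional-491794 ssh "findmnt -T /mount-9p | grep 9p"
    touch /tmp/hostdir/created-on-host
    out/minikube-linux-arm64 -p functional-491794 ssh "ls -la /mount-9p"

The first findmnt in the log fails once and is retried because the mount daemon needs a moment to come up.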

TestFunctional/parallel/MountCmd/specific-port (1.73s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-491794 /tmp/TestFunctionalparallelMountCmdspecific-port4059448240/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-491794 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (324.77422ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0908 12:54:57.533557  560849 retry.go:31] will retry after 359.675494ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-491794 /tmp/TestFunctionalparallelMountCmdspecific-port4059448240/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-491794 ssh "sudo umount -f /mount-9p": exit status 1 (272.734866ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-491794 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-491794 /tmp/TestFunctionalparallelMountCmdspecific-port4059448240/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.73s)
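--port pins the host side of the 9p server to a fixed port (46464 here) instead of a random free one, which helps when only known ports are reachable. A one-line sketch, reusing the illustrative /tmp/hostdir from above:

    out/minikube-linux-arm64 mount -p functional-491794 /tmp/hostdir:/mount-9p --port 46464 &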

TestFunctional/parallel/MountCmd/VerifyCleanup (2.48s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-491794 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3889763955/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-491794 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3889763955/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-491794 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3889763955/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-491794 ssh "findmnt -T" /mount1: exit status 1 (598.286963ms)

** stderr ** 
	ssh: Process exited with status 1

                                                
I0908 12:54:59.542277  560849 retry.go:31] will retry after 600.540277ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-491794 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-491794 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3889763955/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-491794 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3889763955/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-491794 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3889763955/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.48s)

TestFunctional/parallel/ServiceCmd/List (1.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-arm64 -p functional-491794 service list: (1.344244221s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.34s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.4s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-491794 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-arm64 -p functional-491794 service list -o json: (1.404332587s)
functional_test.go:1504: Took "1.404413678s" to run "out/minikube-linux-arm64 -p functional-491794 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.40s)
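service list -o json emits one machine-readable record per service, which is what the JSONOutput test exercises. A sketch of pretty-printing it; jq is an assumption and not part of the test environment:

    out/minikube-linux-arm64 -p functional-491794 service list -o json | jq .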

TestFunctional/delete_echo-server_images (0.05s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-491794
--- PASS: TestFunctional/delete_echo-server_images (0.05s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-491794
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-491794
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (196.31s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E0908 12:56:29.298497  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/addons-090979/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:57:52.375061  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/addons-090979/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-134273 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (3m15.488492019s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (196.31s)
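The --ha flag starts three control-plane nodes behind a shared virtual endpoint (the https://192.168.49.254:8443 server seen in the status logs below); workers are added separately. The start/verify pair from this test:

    out/minikube-linux-arm64 -p ha-134273 start --ha --memory 3072 --wait true --driver=docker --container-runtime=crio
    out/minikube-linux-arm64 -p ha-134273 status --alsologtostderr -v 5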

TestMultiControlPlane/serial/DeployApp (9.42s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-134273 kubectl -- rollout status deployment/busybox: (5.956799635s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 kubectl -- exec busybox-7b57f96db7-5kgdx -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 kubectl -- exec busybox-7b57f96db7-hvmfm -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 kubectl -- exec busybox-7b57f96db7-slg7b -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 kubectl -- exec busybox-7b57f96db7-5kgdx -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 kubectl -- exec busybox-7b57f96db7-hvmfm -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 kubectl -- exec busybox-7b57f96db7-slg7b -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 kubectl -- exec busybox-7b57f96db7-5kgdx -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 kubectl -- exec busybox-7b57f96db7-hvmfm -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 kubectl -- exec busybox-7b57f96db7-slg7b -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (9.42s)
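testdata/ha/ha-pod-dns-test.yaml is not shown in this log; from the busybox-7b57f96db7-* pod names above it is a three-replica busybox Deployment spread over the nodes. A hypothetical equivalent, enough to rerun the DNS lookups (the real fixture likely also sets pod anti-affinity):

    kubectl --context ha-134273 apply -f - <<'EOF'
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: busybox
    spec:
      replicas: 3
      selector:
        matchLabels: {app: busybox}
      template:
        metadata:
          labels: {app: busybox}
        spec:
          containers:
          - name: busybox
            image: busybox
            command: ["sleep", "3600"]
    EOF
    kubectl --context ha-134273 exec deploy/busybox -- nslookup kubernetes.default.svc.cluster.local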

TestMultiControlPlane/serial/PingHostFromPods (1.74s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 kubectl -- exec busybox-7b57f96db7-5kgdx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 kubectl -- exec busybox-7b57f96db7-5kgdx -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 kubectl -- exec busybox-7b57f96db7-hvmfm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 kubectl -- exec busybox-7b57f96db7-hvmfm -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 kubectl -- exec busybox-7b57f96db7-slg7b -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 kubectl -- exec busybox-7b57f96db7-slg7b -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.74s)
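The pipeline in each exec picks the third space-separated field of line 5 of busybox's nslookup output, which is where the resolved address of host.minikube.internal lands; the follow-up ping to 192.168.49.1 then proves pod-to-host-gateway connectivity. An equivalent one-liner against one of the pods above:

    kubectl --context ha-134273 exec busybox-7b57f96db7-5kgdx -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"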

TestMultiControlPlane/serial/AddWorkerNode (32.84s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-134273 node add --alsologtostderr -v 5: (31.854927003s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (32.84s)

TestMultiControlPlane/serial/NodeLabels (0.11s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-134273 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.01s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.007886705s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.01s)

TestMultiControlPlane/serial/CopyFile (19.44s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 cp testdata/cp-test.txt ha-134273:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 ssh -n ha-134273 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 cp ha-134273:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2898099127/001/cp-test_ha-134273.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 ssh -n ha-134273 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 cp ha-134273:/home/docker/cp-test.txt ha-134273-m02:/home/docker/cp-test_ha-134273_ha-134273-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 ssh -n ha-134273 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 ssh -n ha-134273-m02 "sudo cat /home/docker/cp-test_ha-134273_ha-134273-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 cp ha-134273:/home/docker/cp-test.txt ha-134273-m03:/home/docker/cp-test_ha-134273_ha-134273-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 ssh -n ha-134273 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 ssh -n ha-134273-m03 "sudo cat /home/docker/cp-test_ha-134273_ha-134273-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 cp ha-134273:/home/docker/cp-test.txt ha-134273-m04:/home/docker/cp-test_ha-134273_ha-134273-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 ssh -n ha-134273 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 ssh -n ha-134273-m04 "sudo cat /home/docker/cp-test_ha-134273_ha-134273-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 cp testdata/cp-test.txt ha-134273-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 ssh -n ha-134273-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 cp ha-134273-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2898099127/001/cp-test_ha-134273-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 ssh -n ha-134273-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 cp ha-134273-m02:/home/docker/cp-test.txt ha-134273:/home/docker/cp-test_ha-134273-m02_ha-134273.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 ssh -n ha-134273-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 ssh -n ha-134273 "sudo cat /home/docker/cp-test_ha-134273-m02_ha-134273.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 cp ha-134273-m02:/home/docker/cp-test.txt ha-134273-m03:/home/docker/cp-test_ha-134273-m02_ha-134273-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 ssh -n ha-134273-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 ssh -n ha-134273-m03 "sudo cat /home/docker/cp-test_ha-134273-m02_ha-134273-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 cp ha-134273-m02:/home/docker/cp-test.txt ha-134273-m04:/home/docker/cp-test_ha-134273-m02_ha-134273-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 ssh -n ha-134273-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 ssh -n ha-134273-m04 "sudo cat /home/docker/cp-test_ha-134273-m02_ha-134273-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 cp testdata/cp-test.txt ha-134273-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 ssh -n ha-134273-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 cp ha-134273-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2898099127/001/cp-test_ha-134273-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 ssh -n ha-134273-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 cp ha-134273-m03:/home/docker/cp-test.txt ha-134273:/home/docker/cp-test_ha-134273-m03_ha-134273.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 ssh -n ha-134273-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 ssh -n ha-134273 "sudo cat /home/docker/cp-test_ha-134273-m03_ha-134273.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 cp ha-134273-m03:/home/docker/cp-test.txt ha-134273-m02:/home/docker/cp-test_ha-134273-m03_ha-134273-m02.txt
E0908 12:59:34.966317  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/functional-491794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:59:34.974011  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/functional-491794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:59:34.985947  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/functional-491794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:59:35.009637  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/functional-491794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:59:35.050981  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/functional-491794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 12:59:35.132348  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/functional-491794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 ssh -n ha-134273-m03 "sudo cat /home/docker/cp-test.txt"
E0908 12:59:35.294222  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/functional-491794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 ssh -n ha-134273-m02 "sudo cat /home/docker/cp-test_ha-134273-m03_ha-134273-m02.txt"
E0908 12:59:35.615583  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/functional-491794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 cp ha-134273-m03:/home/docker/cp-test.txt ha-134273-m04:/home/docker/cp-test_ha-134273-m03_ha-134273-m04.txt
E0908 12:59:36.257368  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/functional-491794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 ssh -n ha-134273-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 ssh -n ha-134273-m04 "sudo cat /home/docker/cp-test_ha-134273-m03_ha-134273-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 cp testdata/cp-test.txt ha-134273-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 ssh -n ha-134273-m04 "sudo cat /home/docker/cp-test.txt"
E0908 12:59:37.542575  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/functional-491794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 cp ha-134273-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2898099127/001/cp-test_ha-134273-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 ssh -n ha-134273-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 cp ha-134273-m04:/home/docker/cp-test.txt ha-134273:/home/docker/cp-test_ha-134273-m04_ha-134273.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 ssh -n ha-134273-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 ssh -n ha-134273 "sudo cat /home/docker/cp-test_ha-134273-m04_ha-134273.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 cp ha-134273-m04:/home/docker/cp-test.txt ha-134273-m02:/home/docker/cp-test_ha-134273-m04_ha-134273-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 ssh -n ha-134273-m04 "sudo cat /home/docker/cp-test.txt"
E0908 12:59:40.104671  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/functional-491794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 ssh -n ha-134273-m02 "sudo cat /home/docker/cp-test_ha-134273-m04_ha-134273-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 cp ha-134273-m04:/home/docker/cp-test.txt ha-134273-m03:/home/docker/cp-test_ha-134273-m04_ha-134273-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 ssh -n ha-134273-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 ssh -n ha-134273-m03 "sudo cat /home/docker/cp-test_ha-134273-m04_ha-134273-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.44s)
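minikube cp accepts node-prefixed paths on either side, so it covers host-to-node and node-to-node copies; the test walks the full matrix across all four nodes. A minimal round trip, with an illustrative /tmp destination:

    out/minikube-linux-arm64 -p ha-134273 cp testdata/cp-test.txt ha-134273-m02:/home/docker/cp-test.txt
    out/minikube-linux-arm64 -p ha-134273 cp ha-134273-m02:/home/docker/cp-test.txt /tmp/cp-test-back.txt
    diff testdata/cp-test.txt /tmp/cp-test-back.txt && echo round trip ok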

TestMultiControlPlane/serial/StopSecondaryNode (12.72s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 node stop m02 --alsologtostderr -v 5
E0908 12:59:45.226807  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/functional-491794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-134273 node stop m02 --alsologtostderr -v 5: (11.94017975s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-134273 status --alsologtostderr -v 5: exit status 7 (776.919477ms)

-- stdout --
	ha-134273
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-134273-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-134273-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-134273-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0908 12:59:53.508022  608600 out.go:360] Setting OutFile to fd 1 ...
	I0908 12:59:53.508191  608600 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:59:53.508221  608600 out.go:374] Setting ErrFile to fd 2...
	I0908 12:59:53.508243  608600 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 12:59:53.508542  608600 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-558996/.minikube/bin
	I0908 12:59:53.508764  608600 out.go:368] Setting JSON to false
	I0908 12:59:53.508835  608600 mustload.go:65] Loading cluster: ha-134273
	I0908 12:59:53.508912  608600 notify.go:220] Checking for updates...
	I0908 12:59:53.509282  608600 config.go:182] Loaded profile config "ha-134273": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 12:59:53.509367  608600 status.go:174] checking status of ha-134273 ...
	I0908 12:59:53.510295  608600 cli_runner.go:164] Run: docker container inspect ha-134273 --format={{.State.Status}}
	I0908 12:59:53.531239  608600 status.go:371] ha-134273 host status = "Running" (err=<nil>)
	I0908 12:59:53.531264  608600 host.go:66] Checking if "ha-134273" exists ...
	I0908 12:59:53.531615  608600 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-134273
	I0908 12:59:53.562879  608600 host.go:66] Checking if "ha-134273" exists ...
	I0908 12:59:53.563175  608600 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 12:59:53.563220  608600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-134273
	I0908 12:59:53.582354  608600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/21508-558996/.minikube/machines/ha-134273/id_rsa Username:docker}
	I0908 12:59:53.675303  608600 ssh_runner.go:195] Run: systemctl --version
	I0908 12:59:53.679866  608600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 12:59:53.692297  608600 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 12:59:53.771244  608600 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-09-08 12:59:53.759915292 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 12:59:53.771834  608600 kubeconfig.go:125] found "ha-134273" server: "https://192.168.49.254:8443"
	I0908 12:59:53.771877  608600 api_server.go:166] Checking apiserver status ...
	I0908 12:59:53.771933  608600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 12:59:53.783300  608600 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1428/cgroup
	I0908 12:59:53.793471  608600 api_server.go:182] apiserver freezer: "6:freezer:/docker/d9524b8f301a697b3a9241a77c8c7216889a6fd8f2a22a877f03525830f50164/crio/crio-23387e730b9bc647227654e8c4e3696b2dfe86b4deb5bf7f58fa66f82c240d41"
	I0908 12:59:53.793538  608600 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/d9524b8f301a697b3a9241a77c8c7216889a6fd8f2a22a877f03525830f50164/crio/crio-23387e730b9bc647227654e8c4e3696b2dfe86b4deb5bf7f58fa66f82c240d41/freezer.state
	I0908 12:59:53.805642  608600 api_server.go:204] freezer state: "THAWED"
	I0908 12:59:53.805672  608600 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0908 12:59:53.814342  608600 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0908 12:59:53.814374  608600 status.go:463] ha-134273 apiserver status = Running (err=<nil>)
	I0908 12:59:53.814386  608600 status.go:176] ha-134273 status: &{Name:ha-134273 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 12:59:53.814404  608600 status.go:174] checking status of ha-134273-m02 ...
	I0908 12:59:53.814710  608600 cli_runner.go:164] Run: docker container inspect ha-134273-m02 --format={{.State.Status}}
	I0908 12:59:53.840351  608600 status.go:371] ha-134273-m02 host status = "Stopped" (err=<nil>)
	I0908 12:59:53.840374  608600 status.go:384] host is not running, skipping remaining checks
	I0908 12:59:53.840381  608600 status.go:176] ha-134273-m02 status: &{Name:ha-134273-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 12:59:53.840401  608600 status.go:174] checking status of ha-134273-m03 ...
	I0908 12:59:53.840716  608600 cli_runner.go:164] Run: docker container inspect ha-134273-m03 --format={{.State.Status}}
	I0908 12:59:53.857525  608600 status.go:371] ha-134273-m03 host status = "Running" (err=<nil>)
	I0908 12:59:53.857549  608600 host.go:66] Checking if "ha-134273-m03" exists ...
	I0908 12:59:53.857905  608600 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-134273-m03
	I0908 12:59:53.875863  608600 host.go:66] Checking if "ha-134273-m03" exists ...
	I0908 12:59:53.876242  608600 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 12:59:53.877078  608600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-134273-m03
	I0908 12:59:53.894328  608600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33529 SSHKeyPath:/home/jenkins/minikube-integration/21508-558996/.minikube/machines/ha-134273-m03/id_rsa Username:docker}
	I0908 12:59:53.986915  608600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 12:59:54.003194  608600 kubeconfig.go:125] found "ha-134273" server: "https://192.168.49.254:8443"
	I0908 12:59:54.003235  608600 api_server.go:166] Checking apiserver status ...
	I0908 12:59:54.003277  608600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 12:59:54.019597  608600 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1398/cgroup
	I0908 12:59:54.035048  608600 api_server.go:182] apiserver freezer: "6:freezer:/docker/458ffcc14555b3ebb927c349fd5001b699a21c43552151378633a31f2d5b18d5/crio/crio-f76aba80d5f6e6353d223e49dec2693c298091d7487682a65122fa264a4baac6"
	I0908 12:59:54.035144  608600 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/458ffcc14555b3ebb927c349fd5001b699a21c43552151378633a31f2d5b18d5/crio/crio-f76aba80d5f6e6353d223e49dec2693c298091d7487682a65122fa264a4baac6/freezer.state
	I0908 12:59:54.045404  608600 api_server.go:204] freezer state: "THAWED"
	I0908 12:59:54.045436  608600 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0908 12:59:54.054134  608600 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0908 12:59:54.054163  608600 status.go:463] ha-134273-m03 apiserver status = Running (err=<nil>)
	I0908 12:59:54.054174  608600 status.go:176] ha-134273-m03 status: &{Name:ha-134273-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 12:59:54.054190  608600 status.go:174] checking status of ha-134273-m04 ...
	I0908 12:59:54.054518  608600 cli_runner.go:164] Run: docker container inspect ha-134273-m04 --format={{.State.Status}}
	I0908 12:59:54.073252  608600 status.go:371] ha-134273-m04 host status = "Running" (err=<nil>)
	I0908 12:59:54.073281  608600 host.go:66] Checking if "ha-134273-m04" exists ...
	I0908 12:59:54.073675  608600 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-134273-m04
	I0908 12:59:54.096105  608600 host.go:66] Checking if "ha-134273-m04" exists ...
	I0908 12:59:54.096433  608600 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 12:59:54.096483  608600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-134273-m04
	I0908 12:59:54.116313  608600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33534 SSHKeyPath:/home/jenkins/minikube-integration/21508-558996/.minikube/machines/ha-134273-m04/id_rsa Username:docker}
	I0908 12:59:54.208160  608600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 12:59:54.221944  608600 status.go:176] ha-134273-m04 status: &{Name:ha-134273-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.72s)
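
For reference, the apiserver check traced in the stderr block above (locate the kube-apiserver process, confirm its freezer cgroup is THAWED, then poll /healthz) ends in a plain HTTPS probe. The following is a minimal sketch of that last step only, not minikube's actual code; it assumes the cluster CA is not loaded, hence the skipped certificate verification:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Probe the load-balanced apiserver endpoint seen in the log above.
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The apiserver cert is self-signed; minikube verifies against the
			// cluster CA, but this sketch simply skips verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.49.254:8443/healthz")
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body) // "ok" when healthy
}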

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.81s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.81s)

TestMultiControlPlane/serial/RestartSecondaryNode (34.18s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 node start m02 --alsologtostderr -v 5
E0908 12:59:55.469197  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/functional-491794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:00:15.951311  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/functional-491794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-134273 node start m02 --alsologtostderr -v 5: (32.906705946s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-134273 status --alsologtostderr -v 5: (1.157169012s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (34.18s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.3s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.304586819s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.30s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (116.73s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 stop --alsologtostderr -v 5
E0908 13:00:56.914082  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/functional-491794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-134273 stop --alsologtostderr -v 5: (27.01946747s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 start --wait true --alsologtostderr -v 5
E0908 13:01:29.299071  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/addons-090979/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:02:18.835782  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/functional-491794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-134273 start --wait true --alsologtostderr -v 5: (1m29.520750016s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (116.73s)

TestMultiControlPlane/serial/DeleteSecondaryNode (13.02s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-134273 node delete m03 --alsologtostderr -v 5: (12.028963172s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (13.02s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.76s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.76s)

TestMultiControlPlane/serial/StopCluster (35.93s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-134273 stop --alsologtostderr -v 5: (35.81278516s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-134273 status --alsologtostderr -v 5: exit status 7 (119.453191ms)

-- stdout --
	ha-134273
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-134273-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-134273-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0908 13:03:16.887028  622490 out.go:360] Setting OutFile to fd 1 ...
	I0908 13:03:16.887142  622490 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:03:16.887148  622490 out.go:374] Setting ErrFile to fd 2...
	I0908 13:03:16.887152  622490 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:03:16.887499  622490 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-558996/.minikube/bin
	I0908 13:03:16.887744  622490 out.go:368] Setting JSON to false
	I0908 13:03:16.887776  622490 mustload.go:65] Loading cluster: ha-134273
	I0908 13:03:16.888436  622490 config.go:182] Loaded profile config "ha-134273": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 13:03:16.888457  622490 status.go:174] checking status of ha-134273 ...
	I0908 13:03:16.889432  622490 notify.go:220] Checking for updates...
	I0908 13:03:16.889541  622490 cli_runner.go:164] Run: docker container inspect ha-134273 --format={{.State.Status}}
	I0908 13:03:16.907814  622490 status.go:371] ha-134273 host status = "Stopped" (err=<nil>)
	I0908 13:03:16.907839  622490 status.go:384] host is not running, skipping remaining checks
	I0908 13:03:16.907846  622490 status.go:176] ha-134273 status: &{Name:ha-134273 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 13:03:16.907885  622490 status.go:174] checking status of ha-134273-m02 ...
	I0908 13:03:16.908186  622490 cli_runner.go:164] Run: docker container inspect ha-134273-m02 --format={{.State.Status}}
	I0908 13:03:16.935362  622490 status.go:371] ha-134273-m02 host status = "Stopped" (err=<nil>)
	I0908 13:03:16.935386  622490 status.go:384] host is not running, skipping remaining checks
	I0908 13:03:16.935393  622490 status.go:176] ha-134273-m02 status: &{Name:ha-134273-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 13:03:16.935427  622490 status.go:174] checking status of ha-134273-m04 ...
	I0908 13:03:16.935742  622490 cli_runner.go:164] Run: docker container inspect ha-134273-m04 --format={{.State.Status}}
	I0908 13:03:16.954079  622490 status.go:371] ha-134273-m04 host status = "Stopped" (err=<nil>)
	I0908 13:03:16.954102  622490 status.go:384] host is not running, skipping remaining checks
	I0908 13:03:16.954110  622490 status.go:176] ha-134273-m04 status: &{Name:ha-134273-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.93s)
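
The "Non-zero exit" above is expected: with every host stopped, `minikube status` reports through its exit code (7 here) rather than through an error message. A hedged sketch of how a caller could read that code, reusing the binary path and profile name from this run:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Binary path and profile name are taken from the log lines above;
	// adjust both for your own environment.
	cmd := exec.Command("out/minikube-linux-arm64", "-p", "ha-134273", "status")
	out, err := cmd.Output()
	fmt.Print(string(out)) // the per-node status table from the stdout block
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		fmt.Println("status exit code:", ee.ExitCode()) // 7 in the run above
	}
}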

TestMultiControlPlane/serial/RestartCluster (87.57s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E0908 13:04:34.967391  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/functional-491794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-134273 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m26.593178615s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (87.57s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.99s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.99s)

TestMultiControlPlane/serial/AddSecondaryNode (63.77s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 node add --control-plane --alsologtostderr -v 5
E0908 13:05:02.677956  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/functional-491794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-134273 node add --control-plane --alsologtostderr -v 5: (1m2.763217629s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-134273 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-134273 status --alsologtostderr -v 5: (1.011533747s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (63.77s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.03s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.027795439s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.03s)

TestJSONOutput/start/Command (79.01s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-212326 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E0908 13:06:29.299052  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/addons-090979/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-212326 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m19.003417449s)
--- PASS: TestJSONOutput/start/Command (79.01s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.79s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-212326 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.79s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.68s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-212326 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.68s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.84s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-212326 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-212326 --output=json --user=testUser: (5.836554039s)
--- PASS: TestJSONOutput/stop/Command (5.84s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.28s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-511573 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-511573 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (130.699729ms)

-- stdout --
	{"specversion":"1.0","id":"2976bca0-12ed-41a2-a8f2-e149b132a2db","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-511573] minikube v1.36.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"aba78403-628f-41a4-9b11-ff312c1281a7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21508"}}
	{"specversion":"1.0","id":"876efb4d-fc62-4e13-be61-255d66c461be","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e2b36a23-7ca7-4da3-9433-eedcc24d73cd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21508-558996/kubeconfig"}}
	{"specversion":"1.0","id":"f8c39aec-779c-4b38-8fe4-6275954fbe92","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-558996/.minikube"}}
	{"specversion":"1.0","id":"da63191b-1e23-4fe1-af29-1573a930b997","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"419533f6-c706-4ade-bd37-03a6e151634d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"398ac63e-dacc-4d06-9ca5-b4c0f8698916","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-511573" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-511573
--- PASS: TestErrorJSONOutput (0.28s)
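
Each line in the stdout block above is a self-contained CloudEvents-style JSON object, which is what makes --output=json machine-readable even on failure. A minimal decoding sketch; the field names are copied from the events above, while the program itself (reading piped output from stdin) is purely illustrative:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// minikubeEvent mirrors the fields visible in the events above.
type minikubeEvent struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

func main() {
	// e.g. minikube start --output=json 2>&1 | this-program
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev minikubeEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON lines
		}
		// For the error event above, Data also carries exitcode and name
		// (DRV_UNSUPPORTED_OS).
		fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
	}
}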

TestKicCustomNetwork/create_custom_network (42.49s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-707311 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-707311 --network=: (40.381267896s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-707311" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-707311
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-707311: (2.081448664s)
--- PASS: TestKicCustomNetwork/create_custom_network (42.49s)

TestKicCustomNetwork/use_default_bridge_network (35.85s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-656543 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-656543 --network=bridge: (33.823630562s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-656543" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-656543
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-656543: (2.002293528s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (35.85s)

TestKicExistingNetwork (31.01s)

=== RUN   TestKicExistingNetwork
I0908 13:08:48.612940  560849 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0908 13:08:48.629745  560849 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0908 13:08:48.629855  560849 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0908 13:08:48.629878  560849 cli_runner.go:164] Run: docker network inspect existing-network
W0908 13:08:48.645993  560849 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0908 13:08:48.646024  560849 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I0908 13:08:48.646039  560849 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I0908 13:08:48.646141  560849 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0908 13:08:48.663705  560849 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-2b8519f63797 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3a:fe:5c:46:10:2d} reservation:<nil>}
I0908 13:08:48.664001  560849 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001685d90}
I0908 13:08:48.664024  560849 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0908 13:08:48.664078  560849 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0908 13:08:48.724795  560849 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-820906 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-820906 --network=existing-network: (28.808469818s)
helpers_test.go:175: Cleaning up "existing-network-820906" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-820906
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-820906: (2.057509395s)
I0908 13:09:19.608069  560849 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (31.01s)
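
The network.go lines above show the free-subnet scan: 192.168.49.0/24 is already taken by the cluster network, so minikube settles on 192.168.58.0/24. Below is a simplified sketch of that scan. The step of 9 between candidates is an assumption inferred from the subnets appearing in this report (49, 58, 67), and the hard-coded taken-set stands in for what the real code derives from docker network inspect:

package main

import "fmt"

// firstFreeSubnet steps through candidate 192.168.x.0/24 blocks and returns
// the first one not already in use.
func firstFreeSubnet(taken map[string]bool) string {
	for octet := 49; octet <= 247; octet += 9 { // 49, 58, 67, ... as seen above
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		if !taken[cidr] {
			return cidr
		}
	}
	return ""
}

func main() {
	taken := map[string]bool{"192.168.49.0/24": true} // the cluster network above
	fmt.Println(firstFreeSubnet(taken))               // prints 192.168.58.0/24
}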

TestKicCustomSubnet (34.8s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-165990 --subnet=192.168.60.0/24
E0908 13:09:34.970081  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/functional-491794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-165990 --subnet=192.168.60.0/24: (32.565276062s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-165990 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-165990" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-165990
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-165990: (2.213663137s)
--- PASS: TestKicCustomSubnet (34.80s)

TestKicStaticIP (34.23s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-441340 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-441340 --static-ip=192.168.200.200: (32.003717582s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-441340 ip
helpers_test.go:175: Cleaning up "static-ip-441340" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-441340
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-441340: (2.069153389s)
--- PASS: TestKicStaticIP (34.23s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (74.83s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-237900 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-237900 --driver=docker  --container-runtime=crio: (33.706781002s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-240607 --driver=docker  --container-runtime=crio
E0908 13:11:29.300196  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/addons-090979/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-240607 --driver=docker  --container-runtime=crio: (35.764339673s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-237900
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-240607
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-240607" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-240607
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-240607: (2.009939801s)
helpers_test.go:175: Cleaning up "first-237900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-237900
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-237900: (1.963858741s)
--- PASS: TestMinikubeProfile (74.83s)

TestMountStart/serial/StartWithMountFirst (6.79s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-698177 --memory=3072 --mount-string /tmp/TestMountStartserial2069725601/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-698177 --memory=3072 --mount-string /tmp/TestMountStartserial2069725601/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.788770645s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.79s)

TestMountStart/serial/VerifyMountFirst (0.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-698177 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

TestMountStart/serial/StartWithMountSecond (6.05s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-700409 --memory=3072 --mount-string /tmp/TestMountStartserial2069725601/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-700409 --memory=3072 --mount-string /tmp/TestMountStartserial2069725601/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.048253223s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.05s)

TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-700409 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

TestMountStart/serial/DeleteFirst (1.62s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-698177 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-698177 --alsologtostderr -v=5: (1.623614836s)
--- PASS: TestMountStart/serial/DeleteFirst (1.62s)

TestMountStart/serial/VerifyMountPostDelete (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-700409 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

TestMountStart/serial/Stop (1.28s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-700409
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-700409: (1.276688476s)
--- PASS: TestMountStart/serial/Stop (1.28s)

TestMountStart/serial/RestartStopped (8.55s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-700409
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-700409: (7.548941272s)
--- PASS: TestMountStart/serial/RestartStopped (8.55s)

TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-700409 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

TestMultiNode/serial/FreshStart2Nodes (138.09s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-054430 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-054430 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m17.591762819s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-054430 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (138.09s)

TestMultiNode/serial/DeployApp2Nodes (6.68s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-054430 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-054430 -- rollout status deployment/busybox
E0908 13:14:32.381465  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/addons-090979/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-054430 -- rollout status deployment/busybox: (4.686215308s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-054430 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-054430 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-054430 -- exec busybox-7b57f96db7-69j2l -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-054430 -- exec busybox-7b57f96db7-wbjnc -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-054430 -- exec busybox-7b57f96db7-69j2l -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-054430 -- exec busybox-7b57f96db7-wbjnc -- nslookup kubernetes.default
E0908 13:14:34.966588  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/functional-491794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-054430 -- exec busybox-7b57f96db7-69j2l -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-054430 -- exec busybox-7b57f96db7-wbjnc -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.68s)

TestMultiNode/serial/PingHostFrom2Pods (1s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-054430 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-054430 -- exec busybox-7b57f96db7-69j2l -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-054430 -- exec busybox-7b57f96db7-69j2l -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-054430 -- exec busybox-7b57f96db7-wbjnc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-054430 -- exec busybox-7b57f96db7-wbjnc -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.00s)

TestMultiNode/serial/AddNode (57.3s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-054430 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-054430 -v=5 --alsologtostderr: (56.634018484s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-054430 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (57.30s)

TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-054430 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.68s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.68s)

TestMultiNode/serial/CopyFile (10.26s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-054430 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-054430 cp testdata/cp-test.txt multinode-054430:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-054430 ssh -n multinode-054430 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-054430 cp multinode-054430:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1253471911/001/cp-test_multinode-054430.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-054430 ssh -n multinode-054430 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-054430 cp multinode-054430:/home/docker/cp-test.txt multinode-054430-m02:/home/docker/cp-test_multinode-054430_multinode-054430-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-054430 ssh -n multinode-054430 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-054430 ssh -n multinode-054430-m02 "sudo cat /home/docker/cp-test_multinode-054430_multinode-054430-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-054430 cp multinode-054430:/home/docker/cp-test.txt multinode-054430-m03:/home/docker/cp-test_multinode-054430_multinode-054430-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-054430 ssh -n multinode-054430 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-054430 ssh -n multinode-054430-m03 "sudo cat /home/docker/cp-test_multinode-054430_multinode-054430-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-054430 cp testdata/cp-test.txt multinode-054430-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-054430 ssh -n multinode-054430-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-054430 cp multinode-054430-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1253471911/001/cp-test_multinode-054430-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-054430 ssh -n multinode-054430-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-054430 cp multinode-054430-m02:/home/docker/cp-test.txt multinode-054430:/home/docker/cp-test_multinode-054430-m02_multinode-054430.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-054430 ssh -n multinode-054430-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-054430 ssh -n multinode-054430 "sudo cat /home/docker/cp-test_multinode-054430-m02_multinode-054430.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-054430 cp multinode-054430-m02:/home/docker/cp-test.txt multinode-054430-m03:/home/docker/cp-test_multinode-054430-m02_multinode-054430-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-054430 ssh -n multinode-054430-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-054430 ssh -n multinode-054430-m03 "sudo cat /home/docker/cp-test_multinode-054430-m02_multinode-054430-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-054430 cp testdata/cp-test.txt multinode-054430-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-054430 ssh -n multinode-054430-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-054430 cp multinode-054430-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1253471911/001/cp-test_multinode-054430-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-054430 ssh -n multinode-054430-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-054430 cp multinode-054430-m03:/home/docker/cp-test.txt multinode-054430:/home/docker/cp-test_multinode-054430-m03_multinode-054430.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-054430 ssh -n multinode-054430-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-054430 ssh -n multinode-054430 "sudo cat /home/docker/cp-test_multinode-054430-m03_multinode-054430.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-054430 cp multinode-054430-m03:/home/docker/cp-test.txt multinode-054430-m02:/home/docker/cp-test_multinode-054430-m03_multinode-054430-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-054430 ssh -n multinode-054430-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-054430 ssh -n multinode-054430-m02 "sudo cat /home/docker/cp-test_multinode-054430-m03_multinode-054430-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.26s)

TestMultiNode/serial/StopNode (2.27s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-054430 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-054430 node stop m03: (1.218346863s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-054430 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-054430 status: exit status 7 (514.294069ms)

-- stdout --
	multinode-054430
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-054430-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-054430-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-054430 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-054430 status --alsologtostderr: exit status 7 (537.18497ms)
-- stdout --
	multinode-054430
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-054430-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-054430-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0908 13:15:46.556513  675622 out.go:360] Setting OutFile to fd 1 ...
	I0908 13:15:46.556690  675622 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:15:46.556721  675622 out.go:374] Setting ErrFile to fd 2...
	I0908 13:15:46.556747  675622 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:15:46.557031  675622 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-558996/.minikube/bin
	I0908 13:15:46.557259  675622 out.go:368] Setting JSON to false
	I0908 13:15:46.557339  675622 mustload.go:65] Loading cluster: multinode-054430
	I0908 13:15:46.557408  675622 notify.go:220] Checking for updates...
	I0908 13:15:46.557921  675622 config.go:182] Loaded profile config "multinode-054430": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 13:15:46.557974  675622 status.go:174] checking status of multinode-054430 ...
	I0908 13:15:46.558535  675622 cli_runner.go:164] Run: docker container inspect multinode-054430 --format={{.State.Status}}
	I0908 13:15:46.582417  675622 status.go:371] multinode-054430 host status = "Running" (err=<nil>)
	I0908 13:15:46.582446  675622 host.go:66] Checking if "multinode-054430" exists ...
	I0908 13:15:46.582784  675622 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-054430
	I0908 13:15:46.617326  675622 host.go:66] Checking if "multinode-054430" exists ...
	I0908 13:15:46.617700  675622 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 13:15:46.617765  675622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-054430
	I0908 13:15:46.637859  675622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33640 SSHKeyPath:/home/jenkins/minikube-integration/21508-558996/.minikube/machines/multinode-054430/id_rsa Username:docker}
	I0908 13:15:46.727464  675622 ssh_runner.go:195] Run: systemctl --version
	I0908 13:15:46.732049  675622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 13:15:46.745592  675622 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 13:15:46.810644  675622 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-09-08 13:15:46.800562386 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 13:15:46.811258  675622 kubeconfig.go:125] found "multinode-054430" server: "https://192.168.67.2:8443"
	I0908 13:15:46.811296  675622 api_server.go:166] Checking apiserver status ...
	I0908 13:15:46.811370  675622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0908 13:15:46.822676  675622 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1414/cgroup
	I0908 13:15:46.832238  675622 api_server.go:182] apiserver freezer: "6:freezer:/docker/400c94e0887693845c438f2184a8f277fc520ea653d3a24321f7ef54a26e823e/crio/crio-388cf5b761ff17d08e74ba51b6ad4ebad0e5abdcd9e94c17fc7f9f3e044f8035"
	I0908 13:15:46.832319  675622 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/400c94e0887693845c438f2184a8f277fc520ea653d3a24321f7ef54a26e823e/crio/crio-388cf5b761ff17d08e74ba51b6ad4ebad0e5abdcd9e94c17fc7f9f3e044f8035/freezer.state
	I0908 13:15:46.841206  675622 api_server.go:204] freezer state: "THAWED"
	I0908 13:15:46.841236  675622 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0908 13:15:46.850672  675622 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0908 13:15:46.850705  675622 status.go:463] multinode-054430 apiserver status = Running (err=<nil>)
	I0908 13:15:46.850717  675622 status.go:176] multinode-054430 status: &{Name:multinode-054430 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 13:15:46.850734  675622 status.go:174] checking status of multinode-054430-m02 ...
	I0908 13:15:46.851042  675622 cli_runner.go:164] Run: docker container inspect multinode-054430-m02 --format={{.State.Status}}
	I0908 13:15:46.870726  675622 status.go:371] multinode-054430-m02 host status = "Running" (err=<nil>)
	I0908 13:15:46.870754  675622 host.go:66] Checking if "multinode-054430-m02" exists ...
	I0908 13:15:46.871085  675622 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-054430-m02
	I0908 13:15:46.890315  675622 host.go:66] Checking if "multinode-054430-m02" exists ...
	I0908 13:15:46.890642  675622 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0908 13:15:46.890681  675622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-054430-m02
	I0908 13:15:46.908317  675622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33645 SSHKeyPath:/home/jenkins/minikube-integration/21508-558996/.minikube/machines/multinode-054430-m02/id_rsa Username:docker}
	I0908 13:15:46.998669  675622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0908 13:15:47.011513  675622 status.go:176] multinode-054430-m02 status: &{Name:multinode-054430-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0908 13:15:47.011555  675622 status.go:174] checking status of multinode-054430-m03 ...
	I0908 13:15:47.011860  675622 cli_runner.go:164] Run: docker container inspect multinode-054430-m03 --format={{.State.Status}}
	I0908 13:15:47.028937  675622 status.go:371] multinode-054430-m03 host status = "Stopped" (err=<nil>)
	I0908 13:15:47.028960  675622 status.go:384] host is not running, skipping remaining checks
	I0908 13:15:47.028968  675622 status.go:176] multinode-054430-m03 status: &{Name:multinode-054430-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.27s)
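
The status flow in the stderr log above is worth tracing: the binary inspects the docker container state, locates the kube-apiserver process through the cgroup freezer path, then probes https://<node-ip>:8443/healthz. A minimal Go sketch of that final probe, using the address from the log; the skip-verify transport is an assumption for a throwaway cluster's self-signed certificate, not minikube's actual client setup:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Assumption: the apiserver certificate is self-signed, so
	// verification is skipped for this illustration.
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.67.2:8443/healthz")
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}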

TestMultiNode/serial/StartAfterStop (7.82s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-054430 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-054430 node start m03 -v=5 --alsologtostderr: (7.044289862s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-054430 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.82s)

TestMultiNode/serial/RestartKeepsNodes (78.75s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-054430
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-054430
E0908 13:15:58.040984  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/functional-491794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-054430: (24.836848735s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-054430 --wait=true -v=5 --alsologtostderr
E0908 13:16:29.298803  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/addons-090979/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-054430 --wait=true -v=5 --alsologtostderr: (53.780567266s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-054430
--- PASS: TestMultiNode/serial/RestartKeepsNodes (78.75s)

TestMultiNode/serial/DeleteNode (5.6s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-054430 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-054430 node delete m03: (4.931728111s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-054430 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.60s)
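
The go-template passed to kubectl above flattens every node's conditions and prints the status of its "Ready" condition, one per line. A small sketch running that same template through Go's text/template against a stubbed node list (the stub data is invented for illustration):

package main

import (
	"os"
	"text/template"
)

func main() {
	// Same template string the test passes to kubectl: walk every item,
	// find its "Ready" condition, print the status.
	const tmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

	// Stub of `kubectl get nodes -o json`, trimmed to the fields the
	// template touches: two nodes, both Ready.
	nodes := map[string]any{
		"items": []map[string]any{
			{"status": map[string]any{"conditions": []map[string]any{{"type": "Ready", "status": "True"}}}},
			{"status": map[string]any{"conditions": []map[string]any{{"type": "Ready", "status": "True"}}}},
		},
	}
	t := template.Must(template.New("ready").Parse(tmpl))
	if err := t.Execute(os.Stdout, nodes); err != nil {
		panic(err)
	}
}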

TestMultiNode/serial/StopMultiNode (23.86s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-054430 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-054430 stop: (23.656639462s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-054430 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-054430 status: exit status 7 (95.386094ms)
-- stdout --
	multinode-054430
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-054430-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-054430 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-054430 status --alsologtostderr: exit status 7 (103.912187ms)
-- stdout --
	multinode-054430
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-054430-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0908 13:17:43.011408  683529 out.go:360] Setting OutFile to fd 1 ...
	I0908 13:17:43.011617  683529 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:17:43.011650  683529 out.go:374] Setting ErrFile to fd 2...
	I0908 13:17:43.011672  683529 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:17:43.012600  683529 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-558996/.minikube/bin
	I0908 13:17:43.012857  683529 out.go:368] Setting JSON to false
	I0908 13:17:43.012907  683529 mustload.go:65] Loading cluster: multinode-054430
	I0908 13:17:43.013343  683529 config.go:182] Loaded profile config "multinode-054430": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 13:17:43.013371  683529 status.go:174] checking status of multinode-054430 ...
	I0908 13:17:43.013957  683529 cli_runner.go:164] Run: docker container inspect multinode-054430 --format={{.State.Status}}
	I0908 13:17:43.014255  683529 notify.go:220] Checking for updates...
	I0908 13:17:43.032911  683529 status.go:371] multinode-054430 host status = "Stopped" (err=<nil>)
	I0908 13:17:43.032937  683529 status.go:384] host is not running, skipping remaining checks
	I0908 13:17:43.032945  683529 status.go:176] multinode-054430 status: &{Name:multinode-054430 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0908 13:17:43.032989  683529 status.go:174] checking status of multinode-054430-m02 ...
	I0908 13:17:43.033309  683529 cli_runner.go:164] Run: docker container inspect multinode-054430-m02 --format={{.State.Status}}
	I0908 13:17:43.057217  683529 status.go:371] multinode-054430-m02 host status = "Stopped" (err=<nil>)
	I0908 13:17:43.057242  683529 status.go:384] host is not running, skipping remaining checks
	I0908 13:17:43.057249  683529 status.go:176] multinode-054430-m02 status: &{Name:multinode-054430-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.86s)

TestMultiNode/serial/RestartMultiNode (54.38s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-054430 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-054430 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (53.699570104s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-054430 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (54.38s)

TestMultiNode/serial/ValidateNameConflict (35.05s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-054430
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-054430-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-054430-m02 --driver=docker  --container-runtime=crio: exit status 14 (94.23431ms)
-- stdout --
	* [multinode-054430-m02] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21508
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21508-558996/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-558996/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-054430-m02' is duplicated with machine name 'multinode-054430-m02' in profile 'multinode-054430'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-054430-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-054430-m03 --driver=docker  --container-runtime=crio: (32.579349965s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-054430
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-054430: exit status 80 (351.607604ms)
-- stdout --
	* Adding node m03 to cluster multinode-054430 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-054430-m03 already exists in multinode-054430-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-054430-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-054430-m03: (1.970889638s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (35.05s)
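
Both non-zero exits above are the expected ones: a new profile name may not collide with a machine name owned by an existing multi-node profile (exit 14, MK_USAGE), and `node add` refuses a node name that is already taken (exit 80). A toy sketch of the uniqueness rule, with an invented data model:

package main

import "fmt"

// profile is an invented stand-in for minikube's profile config: a name plus
// the machine names it owns ("<profile>", "<profile>-m02", ...).
type profile struct {
	name     string
	machines []string
}

// nameIsFree rejects a candidate profile name that collides with any machine
// name owned by an existing profile, mirroring the MK_USAGE error above.
func nameIsFree(candidate string, existing []profile) error {
	for _, p := range existing {
		for _, m := range p.machines {
			if m == candidate {
				return fmt.Errorf("profile name %q is duplicated with machine name %q in profile %q", candidate, m, p.name)
			}
		}
	}
	return nil
}

func main() {
	existing := []profile{{
		name:     "multinode-054430",
		machines: []string{"multinode-054430", "multinode-054430-m02"},
	}}
	if err := nameIsFree("multinode-054430-m02", existing); err != nil {
		fmt.Println("X Exiting due to MK_USAGE:", err)
	}
}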

TestPreload (134.45s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-271669 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
E0908 13:19:34.966242  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/functional-491794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-271669 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (1m3.906106933s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-271669 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-271669 image pull gcr.io/k8s-minikube/busybox: (3.586444632s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-271669
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-271669: (5.803104089s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-271669 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-271669 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (58.537589109s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-271669 image list
helpers_test.go:175: Cleaning up "test-preload-271669" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-271669
E0908 13:21:29.298375  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/addons-090979/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-271669: (2.368831176s)
--- PASS: TestPreload (134.45s)
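
The preload scenario boils down to: create a cluster without preloaded images, pull an extra image, restart with preloads enabled, and assert the pulled image survived. A hedged sketch replaying those steps with os/exec (error handling abbreviated; binary path and profile name taken from the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// run shells out and returns combined output; failures are only printed,
// which is enough for a sketch.
func run(args ...string) string {
	out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
	if err != nil {
		fmt.Printf("%v failed: %v\n", args, err)
	}
	return string(out)
}

func main() {
	const mk = "out/minikube-linux-arm64"
	const profile = "test-preload-271669"
	run(mk, "start", "-p", profile, "--memory=3072", "--preload=false", "--driver=docker", "--container-runtime=crio", "--kubernetes-version=v1.32.0")
	run(mk, "-p", profile, "image", "pull", "gcr.io/k8s-minikube/busybox")
	run(mk, "stop", "-p", profile)
	run(mk, "start", "-p", profile, "--memory=3072", "--driver=docker", "--container-runtime=crio")
	if !strings.Contains(run(mk, "-p", profile, "image", "list"), "busybox") {
		fmt.Println("pulled image was lost across the preload restart")
	}
}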

TestScheduledStopUnix (109.83s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-359259 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-359259 --memory=3072 --driver=docker  --container-runtime=crio: (33.620282217s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-359259 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-359259 -n scheduled-stop-359259
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-359259 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0908 13:22:05.181180  560849 retry.go:31] will retry after 61.374µs: open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/scheduled-stop-359259/pid: no such file or directory
I0908 13:22:05.181629  560849 retry.go:31] will retry after 198.42µs: open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/scheduled-stop-359259/pid: no such file or directory
I0908 13:22:05.181942  560849 retry.go:31] will retry after 179.965µs: open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/scheduled-stop-359259/pid: no such file or directory
I0908 13:22:05.182197  560849 retry.go:31] will retry after 219.047µs: open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/scheduled-stop-359259/pid: no such file or directory
I0908 13:22:05.183081  560849 retry.go:31] will retry after 711.121µs: open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/scheduled-stop-359259/pid: no such file or directory
I0908 13:22:05.184225  560849 retry.go:31] will retry after 840.264µs: open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/scheduled-stop-359259/pid: no such file or directory
I0908 13:22:05.185373  560849 retry.go:31] will retry after 1.35723ms: open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/scheduled-stop-359259/pid: no such file or directory
I0908 13:22:05.187586  560849 retry.go:31] will retry after 2.442814ms: open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/scheduled-stop-359259/pid: no such file or directory
I0908 13:22:05.190837  560849 retry.go:31] will retry after 1.768717ms: open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/scheduled-stop-359259/pid: no such file or directory
I0908 13:22:05.193117  560849 retry.go:31] will retry after 2.17959ms: open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/scheduled-stop-359259/pid: no such file or directory
I0908 13:22:05.196566  560849 retry.go:31] will retry after 7.935696ms: open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/scheduled-stop-359259/pid: no such file or directory
I0908 13:22:05.204806  560849 retry.go:31] will retry after 8.359942ms: open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/scheduled-stop-359259/pid: no such file or directory
I0908 13:22:05.213885  560849 retry.go:31] will retry after 13.669449ms: open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/scheduled-stop-359259/pid: no such file or directory
I0908 13:22:05.228080  560849 retry.go:31] will retry after 22.268789ms: open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/scheduled-stop-359259/pid: no such file or directory
I0908 13:22:05.250529  560849 retry.go:31] will retry after 41.640982ms: open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/scheduled-stop-359259/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-359259 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-359259 -n scheduled-stop-359259
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-359259
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-359259 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-359259
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-359259: exit status 7 (74.614191ms)
-- stdout --
	scheduled-stop-359259
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-359259 -n scheduled-stop-359259
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-359259 -n scheduled-stop-359259: exit status 7 (70.176181ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-359259" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-359259
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-359259: (4.619007261s)
--- PASS: TestScheduledStopUnix (109.83s)
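
The retry.go lines above show a growing-backoff poll on the profile's pid file. A minimal sketch of that pattern; the jittered doubling here is an assumption, only the "will retry after <duration>" behaviour is taken from the log, and the path is a hypothetical stand-in:

package main

import (
	"fmt"
	"math/rand"
	"os"
	"time"
)

// waitForFile polls path with a jittered, roughly doubling backoff until it
// becomes readable or the budget runs out.
func waitForFile(path string, budget time.Duration) ([]byte, error) {
	deadline := time.Now().Add(budget)
	backoff := 100 * time.Microsecond
	for {
		data, err := os.ReadFile(path)
		if err == nil {
			return data, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("gave up waiting for %s: %w", path, err)
		}
		wait := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
		backoff *= 2
	}
}

func main() {
	if _, err := waitForFile("/tmp/scheduled-stop-pid", 2*time.Second); err != nil {
		fmt.Println(err)
	}
}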

TestInsufficientStorage (11.91s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-941723 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-941723 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (9.413287444s)
-- stdout --
	{"specversion":"1.0","id":"00b98602-b84c-4e6a-9519-ebbc726a9215","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-941723] minikube v1.36.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9774525f-13b8-40f7-9a66-ebe17b8892b4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21508"}}
	{"specversion":"1.0","id":"7325bd86-31ff-49d2-ae5a-5629b047d849","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ce1dfe81-1f1f-40da-b237-58a44cb6e606","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21508-558996/kubeconfig"}}
	{"specversion":"1.0","id":"76ba8b51-9e28-4aa1-8725-861c6e15aa34","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-558996/.minikube"}}
	{"specversion":"1.0","id":"e126d111-6074-43cc-9257-7478966c0c92","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"1291c57e-0443-4635-83ff-0854e39023b1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"fb035d41-d715-47d8-991c-c2a802992186","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"4c44266f-0fce-46c2-bdc6-7328e1a59bb4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"50116208-5276-40bd-8537-62320846ac63","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"e55e2a96-63b0-49e3-a642-cf77f06471ee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"93219ef0-efe8-4efc-ad4a-6c8f7e2ee8c2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-941723\" primary control-plane node in \"insufficient-storage-941723\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"2b3e2679-79bc-4b2c-b4e2-4d727170fba1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.47-1756980985-21488 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"6eb1ae0e-98e5-4439-9214-6bc3687d8001","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"90e66e8c-8290-4361-b52a-fce4ab4a2671","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-941723 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-941723 --output=json --layout=cluster: exit status 7 (292.997905ms)
-- stdout --
	{"Name":"insufficient-storage-941723","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.36.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-941723","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0908 13:23:30.558829  700879 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-941723" does not appear in /home/jenkins/minikube-integration/21508-558996/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-941723 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-941723 --output=json --layout=cluster: exit status 7 (303.136876ms)
-- stdout --
	{"Name":"insufficient-storage-941723","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.36.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-941723","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0908 13:23:30.863658  700944 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-941723" does not appear in /home/jenkins/minikube-integration/21508-558996/kubeconfig
	E0908 13:23:30.873641  700944 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/insufficient-storage-941723/events.json: no such file or directory
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-941723" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-941723
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-941723: (1.896248517s)
--- PASS: TestInsufficientStorage (11.91s)
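
With --output=json, start emits one CloudEvents-style JSON object per line, and the test keys off the final io.k8s.sigs.minikube.error event carrying exitcode 26 (RSRC_DOCKER_STORAGE). A sketch of picking that event out of a stream, with the struct trimmed to the fields visible in the lines above:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

// event mirrors only the CloudEvents fields this report shows; every value
// in data is a string in the output above.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if !strings.HasPrefix(line, "{") {
			continue // skip anything that is not an event line
		}
		var ev event
		if err := json.Unmarshal([]byte(line), &ev); err != nil {
			continue
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("exit %s (%s): %s\n", ev.Data["exitcode"], ev.Data["name"], ev.Data["message"])
		}
	}
}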

TestRunningBinaryUpgrade (56.31s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.532120184 start -p running-upgrade-288048 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.532120184 start -p running-upgrade-288048 --memory=3072 --vm-driver=docker  --container-runtime=crio: (35.438580006s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-288048 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-288048 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (17.877883194s)
helpers_test.go:175: Cleaning up "running-upgrade-288048" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-288048
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-288048: (2.043220428s)
--- PASS: TestRunningBinaryUpgrade (56.31s)

TestKubernetesUpgrade (364.55s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-106949 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-106949 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (44.861069782s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-106949
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-106949: (2.403034896s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-106949 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-106949 status --format={{.Host}}: exit status 7 (198.962223ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-106949 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-106949 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m39.832994906s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-106949 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-106949 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-106949 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (134.226716ms)
-- stdout --
	* [kubernetes-upgrade-106949] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21508
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21508-558996/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-558996/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-106949
	    minikube start -p kubernetes-upgrade-106949 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1069492 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.0, by running:
	    
	    minikube start -p kubernetes-upgrade-106949 --kubernetes-version=v1.34.0
	    
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-106949 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-106949 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (34.178477413s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-106949" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-106949
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-106949: (2.820296238s)
--- PASS: TestKubernetesUpgrade (364.55s)
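
The downgrade attempt is rejected up front (exit 106, K8S_DOWNGRADE_UNSUPPORTED) instead of risking an incompatible cluster state. A toy version of that guard using golang.org/x/mod/semver as a stand-in comparator; this is not necessarily how minikube compares versions internally:

package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

func main() {
	current, requested := "v1.34.0", "v1.28.0"
	// Refuse to move the cluster to an older Kubernetes release.
	if semver.Compare(requested, current) < 0 {
		fmt.Printf("X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes %s cluster to %s\n", current, requested)
	}
}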

TestMissingContainerUpgrade (123.61s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.4087491556 start -p missing-upgrade-494691 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.4087491556 start -p missing-upgrade-494691 --memory=3072 --driver=docker  --container-runtime=crio: (1m7.57921318s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-494691
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-494691
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-494691 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-494691 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (51.785300832s)
helpers_test.go:175: Cleaning up "missing-upgrade-494691" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-494691
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-494691: (2.289022412s)
--- PASS: TestMissingContainerUpgrade (123.61s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-553875 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-553875 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (97.386832ms)
-- stdout --
	* [NoKubernetes-553875] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21508
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21508-558996/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-558996/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
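
--no-kubernetes and an explicit --kubernetes-version are mutually exclusive, hence the immediate MK_USAGE exit (14) with no cluster created. A small sketch of that kind of flag validation; the wiring here is illustrative only:

package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noK8s := flag.Bool("no-kubernetes", false, "start without Kubernetes")
	k8sVersion := flag.String("kubernetes-version", "", "explicit Kubernetes version")
	flag.Parse()
	// An explicit version makes no sense when Kubernetes is disabled.
	if *noK8s && *k8sVersion != "" {
		fmt.Println("X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14)
	}
}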

TestNoKubernetes/serial/StartWithK8s (49.35s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-553875 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-553875 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (48.777025624s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-553875 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (49.35s)

TestNoKubernetes/serial/StartWithStopK8s (14.57s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-553875 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-553875 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (11.982344488s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-553875 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-553875 status -o json: exit status 2 (505.810162ms)
-- stdout --
	{"Name":"NoKubernetes-553875","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-553875
E0908 13:24:34.966533  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/functional-491794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-553875: (2.080709243s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (14.57s)

TestNoKubernetes/serial/Start (9.45s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-553875 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-553875 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (9.446485608s)
--- PASS: TestNoKubernetes/serial/Start (9.45s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-553875 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-553875 "sudo systemctl is-active --quiet service kubelet": exit status 1 (260.877103ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)
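
The assertion is inverted: `systemctl is-active --quiet` exits 0 only when the unit is active, and status 3 means inactive, so the ssh command failing is exactly what proves Kubernetes is not running. A sketch of reading that exit code from Go; the unit name is simplified to kubelet, and the test actually runs this over `minikube ssh`:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("kubelet is active: Kubernetes is still running")
	case errors.As(err, &exitErr):
		// Status 3 is systemd's "inactive" answer: the pass case here.
		fmt.Println("kubelet is not active, exit code:", exitErr.ExitCode())
	default:
		fmt.Println("could not run systemctl:", err)
	}
}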

TestNoKubernetes/serial/ProfileList (0.7s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.70s)

TestNoKubernetes/serial/Stop (1.2s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-553875
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-553875: (1.204828834s)
--- PASS: TestNoKubernetes/serial/Stop (1.20s)

TestNoKubernetes/serial/StartNoArgs (6.76s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-553875 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-553875 --driver=docker  --container-runtime=crio: (6.75588528s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.76s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-553875 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-553875 "sudo systemctl is-active --quiet service kubelet": exit status 1 (265.690921ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

TestStoppedBinaryUpgrade/Setup (1.11s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.11s)

TestStoppedBinaryUpgrade/Upgrade (60.43s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.507290762 start -p stopped-upgrade-573373 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.507290762 start -p stopped-upgrade-573373 --memory=3072 --vm-driver=docker  --container-runtime=crio: (38.639909152s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.507290762 -p stopped-upgrade-573373 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.507290762 -p stopped-upgrade-573373 stop: (1.269696485s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-573373 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0908 13:26:29.299301  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/addons-090979/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-573373 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (20.517890236s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (60.43s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.2s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-573373
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-573373: (1.203093724s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.20s)

TestPause/serial/Start (80.28s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-594010 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-594010 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m20.281942301s)
--- PASS: TestPause/serial/Start (80.28s)

TestPause/serial/SecondStartNoReconfiguration (17.53s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-594010 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-594010 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (17.512124306s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (17.53s)

TestPause/serial/Pause (0.79s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-594010 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.79s)

TestPause/serial/VerifyStatus (0.31s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-594010 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-594010 --output=json --layout=cluster: exit status 2 (309.19864ms)

-- stdout --
	{"Name":"pause-594010","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.36.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-594010","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.31s)
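The --output=json --layout=cluster payload above is machine-readable: StatusCode 418 maps to "Paused", 405 to "Stopped", 200 to "OK", and the CLI additionally signals a not-fully-running cluster through exit status 2, which is why the harness notes "(may be ok)". A Go sketch of decoding that payload follows; the struct fields are inferred from this single sample rather than taken from a published minikube schema.

// status_shape.go - decoding sketch for the cluster-layout status JSON.
package main

import (
	"encoding/json"
	"fmt"
)

type Component struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

type Node struct {
	Name       string               `json:"Name"`
	StatusCode int                  `json:"StatusCode"`
	StatusName string               `json:"StatusName"`
	Components map[string]Component `json:"Components"`
}

type ClusterStatus struct {
	Name          string               `json:"Name"`
	StatusCode    int                  `json:"StatusCode"` // 418 for "Paused" in the sample above
	StatusName    string               `json:"StatusName"`
	BinaryVersion string               `json:"BinaryVersion"`
	Components    map[string]Component `json:"Components"`
	Nodes         []Node               `json:"Nodes"`
}

func main() {
	raw := []byte(`{"Name":"pause-594010","StatusCode":418,"StatusName":"Paused","Nodes":[]}`)
	var st ClusterStatus
	if err := json.Unmarshal(raw, &st); err != nil {
		panic(err)
	}
	fmt.Printf("%s -> %d %s\n", st.Name, st.StatusCode, st.StatusName)
}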

TestPause/serial/Unpause (0.69s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-594010 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.69s)

TestPause/serial/PauseAgain (0.87s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-594010 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.87s)

TestPause/serial/DeletePaused (2.69s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-594010 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-594010 --alsologtostderr -v=5: (2.688030091s)
--- PASS: TestPause/serial/DeletePaused (2.69s)

TestPause/serial/VerifyDeletedResources (13.41s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (13.330281502s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-594010
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-594010: exit status 1 (21.46835ms)

-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-594010: no such volume
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (13.41s)
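VerifyDeletedResources asserts cleanup by expecting commands to fail: once the profile is deleted, docker volume inspect must exit non-zero and the daemon must answer "no such volume", exactly as in the stderr block above. A small Go sketch of that inverted assertion; the helper name is made up.

// verify_deleted.go - illustrative post-delete check.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// volumeGone returns true only when docker confirms the volume no longer exists.
func volumeGone(name string) bool {
	out, err := exec.Command("docker", "volume", "inspect", name).CombinedOutput()
	if err == nil {
		return false // exit 0 would mean the volume survived deletion
	}
	// Non-zero exit plus the daemon's message is the expected state.
	return strings.Contains(string(out), "no such volume")
}

func main() {
	fmt.Println("pause-594010 removed:", volumeGone("pause-594010"))
}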

TestNetworkPlugins/group/false (5.22s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-182649 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-182649 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (244.915786ms)

-- stdout --
	* [false-182649] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21508
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21508-558996/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-558996/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
-- /stdout --
** stderr ** 
	I0908 13:30:19.262246  738622 out.go:360] Setting OutFile to fd 1 ...
	I0908 13:30:19.262432  738622 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:30:19.262443  738622 out.go:374] Setting ErrFile to fd 2...
	I0908 13:30:19.262449  738622 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0908 13:30:19.262740  738622 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21508-558996/.minikube/bin
	I0908 13:30:19.263303  738622 out.go:368] Setting JSON to false
	I0908 13:30:19.264317  738622 start.go:130] hostinfo: {"hostname":"ip-172-31-24-2","uptime":11572,"bootTime":1757326648,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I0908 13:30:19.264387  738622 start.go:140] virtualization:  
	I0908 13:30:19.268011  738622 out.go:179] * [false-182649] minikube v1.36.0 on Ubuntu 20.04 (arm64)
	I0908 13:30:19.271277  738622 notify.go:220] Checking for updates...
	I0908 13:30:19.271252  738622 out.go:179]   - MINIKUBE_LOCATION=21508
	I0908 13:30:19.275461  738622 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0908 13:30:19.278533  738622 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21508-558996/kubeconfig
	I0908 13:30:19.281560  738622 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21508-558996/.minikube
	I0908 13:30:19.285021  738622 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0908 13:30:19.287790  738622 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0908 13:30:19.291103  738622 config.go:182] Loaded profile config "kubernetes-upgrade-106949": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0908 13:30:19.291208  738622 driver.go:421] Setting default libvirt URI to qemu:///system
	I0908 13:30:19.315518  738622 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0908 13:30:19.315641  738622 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0908 13:30:19.418255  738622 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-08 13:30:19.408053765 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0908 13:30:19.419500  738622 docker.go:318] overlay module found
	I0908 13:30:19.423942  738622 out.go:179] * Using the docker driver based on user configuration
	I0908 13:30:19.426931  738622 start.go:304] selected driver: docker
	I0908 13:30:19.426963  738622 start.go:918] validating driver "docker" against <nil>
	I0908 13:30:19.426980  738622 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0908 13:30:19.430610  738622 out.go:203] 
	W0908 13:30:19.433594  738622 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0908 13:30:19.437162  738622 out.go:203] 
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-182649 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-182649

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-182649

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-182649

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-182649

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-182649

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-182649

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-182649

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-182649

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-182649

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-182649

>>> host: /etc/nsswitch.conf:
* Profile "false-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-182649"

>>> host: /etc/hosts:
* Profile "false-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-182649"

>>> host: /etc/resolv.conf:
* Profile "false-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-182649"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-182649

>>> host: crictl pods:
* Profile "false-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-182649"

>>> host: crictl containers:
* Profile "false-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-182649"

>>> k8s: describe netcat deployment:
error: context "false-182649" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-182649" does not exist

>>> k8s: netcat logs:
error: context "false-182649" does not exist

>>> k8s: describe coredns deployment:
error: context "false-182649" does not exist

>>> k8s: describe coredns pods:
error: context "false-182649" does not exist

>>> k8s: coredns logs:
error: context "false-182649" does not exist

>>> k8s: describe api server pod(s):
error: context "false-182649" does not exist

>>> k8s: api server logs:
error: context "false-182649" does not exist

>>> host: /etc/cni:
* Profile "false-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-182649"

>>> host: ip a s:
* Profile "false-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-182649"

>>> host: ip r s:
* Profile "false-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-182649"

>>> host: iptables-save:
* Profile "false-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-182649"

>>> host: iptables table nat:
* Profile "false-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-182649"

>>> k8s: describe kube-proxy daemon set:
error: context "false-182649" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-182649" does not exist

>>> k8s: kube-proxy logs:
error: context "false-182649" does not exist

>>> host: kubelet daemon status:
* Profile "false-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-182649"

>>> host: kubelet daemon config:
* Profile "false-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-182649"

>>> k8s: kubelet logs:
* Profile "false-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-182649"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-182649"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-182649"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21508-558996/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 08 Sep 2025 13:25:58 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-106949
contexts:
- context:
    cluster: kubernetes-upgrade-106949
    user: kubernetes-upgrade-106949
  name: kubernetes-upgrade-106949
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-106949
  user:
    client-certificate: /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/kubernetes-upgrade-106949/client.crt
    client-key: /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/kubernetes-upgrade-106949/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-182649

>>> host: docker daemon status:
* Profile "false-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-182649"

>>> host: docker daemon config:
* Profile "false-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-182649"

>>> host: /etc/docker/daemon.json:
* Profile "false-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-182649"

>>> host: docker system info:
* Profile "false-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-182649"

>>> host: cri-docker daemon status:
* Profile "false-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-182649"

>>> host: cri-docker daemon config:
* Profile "false-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-182649"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-182649"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-182649"

>>> host: cri-dockerd version:
* Profile "false-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-182649"

>>> host: containerd daemon status:
* Profile "false-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-182649"

>>> host: containerd daemon config:
* Profile "false-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-182649"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-182649"

>>> host: /etc/containerd/config.toml:
* Profile "false-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-182649"

>>> host: containerd config dump:
* Profile "false-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-182649"

>>> host: crio daemon status:
* Profile "false-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-182649"

>>> host: crio daemon config:
* Profile "false-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-182649"

>>> host: /etc/crio:
* Profile "false-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-182649"

>>> host: crio config:
* Profile "false-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-182649"
----------------------- debugLogs end: false-182649 [took: 4.720924277s] --------------------------------
helpers_test.go:175: Cleaning up "false-182649" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-182649
--- PASS: TestNetworkPlugins/group/false (5.22s)
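This group is expected to fail fast: CRI-O ships no built-in pod network, so --cni=false combined with --container-runtime=crio is rejected during flag validation with MK_USAGE (exit status 14) before any node is created, which is the X Exiting line in the stderr above. An illustrative guard of that shape, assuming nothing beyond the error text in the log; this is not minikube's actual validation code.

// cni_guard.go - illustrative flag-validation guard.
package main

import "fmt"

// validateCNI rejects runtime/CNI combinations that cannot schedule pods.
func validateCNI(runtime, cni string) error {
	// CRI-O and containerd rely on an external CNI plugin for pod
	// networking, so disabling CNI with those runtimes is a usage error.
	if cni == "false" && runtime != "docker" {
		return fmt.Errorf("the %q container runtime requires CNI", runtime)
	}
	return nil
}

func main() {
	if err := validateCNI("crio", "false"); err != nil {
		fmt.Println("X Exiting due to MK_USAGE:", err) // the CLI exits 14 here
	}
}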

TestStartStop/group/old-k8s-version/serial/FirstStart (62.47s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-191694 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
E0908 13:32:38.043030  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/functional-491794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-191694 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (1m2.465700879s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (62.47s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.41s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-191694 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [d5503c9c-3cc0-4043-91ca-05828d3e5c32] Pending
helpers_test.go:352: "busybox" [d5503c9c-3cc0-4043-91ca-05828d3e5c32] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [d5503c9c-3cc0-4043-91ca-05828d3e5c32] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.004132592s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-191694 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.41s)
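The "waiting 8m0s for pods matching ..." lines come from a poll-until-Running helper in the harness. A dependency-free sketch of the same idea, shelling out to kubectl with the context and label used by this test; the function is hypothetical, not the helpers_test.go implementation.

// wait_pods.go - illustrative poll-until-Running loop.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForRunning polls pod phases for a label until one reports Running.
func waitForRunning(kubectx, label string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubectx,
			"get", "pods", "-l", label,
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil && strings.Contains(string(out), "Running") {
			return nil // the Pending -> Running transition seen in the log
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("no Running pod for %q within %s", label, timeout)
}

func main() {
	err := waitForRunning("old-k8s-version-191694", "integration-test=busybox", 8*time.Minute)
	fmt.Println("wait result:", err)
}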

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.17s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-191694 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-191694 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.043350668s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-191694 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.17s)

TestStartStop/group/old-k8s-version/serial/Stop (11.95s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-191694 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-191694 --alsologtostderr -v=3: (11.949431422s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.95s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-191694 -n old-k8s-version-191694
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-191694 -n old-k8s-version-191694: exit status 7 (71.730481ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-191694 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/old-k8s-version/serial/SecondStart (52.37s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-191694 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-191694 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (51.978582349s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-191694 -n old-k8s-version-191694
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (52.37s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-42kmv" [ee8f1399-ec17-4b1e-a70f-816f55c7d169] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003437624s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-42kmv" [ee8f1399-ec17-4b1e-a70f-816f55c7d169] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003348338s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-191694 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.11s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-191694 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)
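VerifyKubernetesImages lists the node's images and reports anything outside the registries minikube itself provisions from, which is why the kindest/kindnetd and busybox test images are flagged above. A sketch of that classification; the allowlist below is an illustrative guess, not minikube's actual rule set.

// image_filter.go - illustrative non-minikube image classification.
package main

import (
	"fmt"
	"strings"
)

// isMinikubeImage approximates the allowlist: core components come from
// registry.k8s.io, plus minikube's own storage provisioner image.
func isMinikubeImage(ref string) bool {
	for _, prefix := range []string{"registry.k8s.io/", "gcr.io/k8s-minikube/storage-provisioner"} {
		if strings.HasPrefix(ref, prefix) {
			return true
		}
	}
	return false
}

func main() {
	for _, ref := range []string{
		"registry.k8s.io/kube-apiserver:v1.28.0",
		"kindest/kindnetd:v20250512-df8de77b",
		"gcr.io/k8s-minikube/busybox:1.28.4-glibc",
	} {
		if !isMinikubeImage(ref) {
			fmt.Println("Found non-minikube image:", ref)
		}
	}
}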

TestStartStop/group/old-k8s-version/serial/Pause (3.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-191694 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-191694 -n old-k8s-version-191694
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-191694 -n old-k8s-version-191694: exit status 2 (322.41073ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-191694 -n old-k8s-version-191694
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-191694 -n old-k8s-version-191694: exit status 2 (338.335041ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-191694 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-191694 -n old-k8s-version-191694
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-191694 -n old-k8s-version-191694
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.27s)

TestStartStop/group/no-preload/serial/FirstStart (72.36s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-848296 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
E0908 13:34:34.966260  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/functional-491794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-848296 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (1m12.363492127s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (72.36s)

TestStartStop/group/embed-certs/serial/FirstStart (79.89s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-042142 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-042142 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (1m19.885422668s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (79.89s)

TestStartStop/group/no-preload/serial/DeployApp (10.42s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-848296 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [93e343b6-c2d3-4c56-9406-62a82758deac] Pending
helpers_test.go:352: "busybox" [93e343b6-c2d3-4c56-9406-62a82758deac] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [93e343b6-c2d3-4c56-9406-62a82758deac] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004652352s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-848296 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.42s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.25s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-848296 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-848296 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.156612995s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-848296 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.25s)

TestStartStop/group/no-preload/serial/Stop (12.06s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-848296 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-848296 --alsologtostderr -v=3: (12.063755526s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.06s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-848296 -n no-preload-848296
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-848296 -n no-preload-848296: exit status 7 (77.3744ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-848296 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/no-preload/serial/SecondStart (55.4s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-848296 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
E0908 13:36:29.299378  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/addons-090979/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-848296 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (55.0463485s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-848296 -n no-preload-848296
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (55.40s)

TestStartStop/group/embed-certs/serial/DeployApp (11.35s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-042142 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [074ded95-92db-4db0-b77b-7257b9b4132e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [074ded95-92db-4db0-b77b-7257b9b4132e] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.003577568s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-042142 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.35s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.31s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-042142 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-042142 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.205813619s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-042142 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.31s)

TestStartStop/group/embed-certs/serial/Stop (11.99s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-042142 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-042142 --alsologtostderr -v=3: (11.99080108s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.99s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-hdndd" [5d47a7ae-7efa-49c2-b5f2-ca3facb846e6] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002853367s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-042142 -n embed-certs-042142
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-042142 -n embed-certs-042142: exit status 7 (81.688173ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-042142 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)
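
A condensed shell sketch of the sequence this test exercises: query the host state of the stopped profile (the log shows `status` exiting 7 with "Stopped", which the harness tolerates), then enable the dashboard addon while the profile is down. The minikube commands and flags are taken verbatim from the log; the exit-code handling around them is an assumed wrapper for illustration, not the harness's own code.

    #!/bin/sh
    p=embed-certs-042142
    # A stopped profile makes `status` exit non-zero; the run above exited 7.
    out/minikube-linux-arm64 status --format='{{.Host}}' -p "$p" -n "$p"
    rc=$?
    # Treat exit code 7 ("Stopped") as acceptable, as the test does.
    [ "$rc" -eq 0 ] || [ "$rc" -eq 7 ] || { echo "unexpected status exit: $rc" >&2; exit 1; }
    # Enabling an addon on a stopped profile records it in the profile config.
    out/minikube-linux-arm64 addons enable dashboard -p "$p" --images=MetricsScraper=registry.k8s.io/echoserver:1.4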

TestStartStop/group/embed-certs/serial/SecondStart (53.39s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-042142 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-042142 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (52.971663676s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-042142 -n embed-certs-042142
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (53.39s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.14s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-hdndd" [5d47a7ae-7efa-49c2-b5f2-ca3facb846e6] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004075823s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-848296 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.14s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.32s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-848296 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.32s)

TestStartStop/group/no-preload/serial/Pause (4.72s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-848296 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p no-preload-848296 --alsologtostderr -v=1: (1.303958713s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-848296 -n no-preload-848296
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-848296 -n no-preload-848296: exit status 2 (490.469204ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-848296 -n no-preload-848296
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-848296 -n no-preload-848296: exit status 2 (452.67108ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-848296 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 unpause -p no-preload-848296 --alsologtostderr -v=1: (1.228978738s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-848296 -n no-preload-848296
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-848296 -n no-preload-848296
--- PASS: TestStartStop/group/no-preload/serial/Pause (4.72s)
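
The pause check follows a fixed pattern: pause the profile, confirm via `minikube status` that the API server reports Paused and the kubelet reports Stopped (status exits 2 in that state), then unpause and confirm both probes succeed again. A minimal sketch reusing the exact commands from the log; reading exit code 2 as "expected while paused" is inferred from the "(may be ok)" lines above.

    #!/bin/sh
    p=no-preload-848296
    out/minikube-linux-arm64 pause -p "$p" --alsologtostderr -v=1
    # While paused, each status probe exits 2 and prints the component state.
    out/minikube-linux-arm64 status --format='{{.APIServer}}' -p "$p" -n "$p"  # Paused
    out/minikube-linux-arm64 status --format='{{.Kubelet}}' -p "$p" -n "$p"    # Stopped
    out/minikube-linux-arm64 unpause -p "$p" --alsologtostderr -v=1
    # After unpause, both probes exit 0 again, as in the final two Run lines above.
    out/minikube-linux-arm64 status --format='{{.APIServer}}' -p "$p" -n "$p"
    out/minikube-linux-arm64 status --format='{{.Kubelet}}' -p "$p" -n "$p"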

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (88.5s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-000477 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
E0908 13:37:56.072677  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/old-k8s-version-191694/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:37:56.079094  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/old-k8s-version-191694/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:37:56.090443  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/old-k8s-version-191694/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:37:56.111806  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/old-k8s-version-191694/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:37:56.153130  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/old-k8s-version-191694/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:37:56.234489  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/old-k8s-version-191694/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:37:56.395936  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/old-k8s-version-191694/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:37:56.717521  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/old-k8s-version-191694/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:37:57.358952  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/old-k8s-version-191694/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:37:58.640447  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/old-k8s-version-191694/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-000477 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (1m28.50265293s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (88.50s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-c4ztf" [13b86aad-e255-441c-bbd2-6e5428c82b65] Running
E0908 13:38:01.202243  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/old-k8s-version-191694/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003146651s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-c4ztf" [13b86aad-e255-441c-bbd2-6e5428c82b65] Running
E0908 13:38:06.323811  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/old-k8s-version-191694/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00362888s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-042142 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-042142 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/embed-certs/serial/Pause (3.12s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-042142 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-042142 -n embed-certs-042142
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-042142 -n embed-certs-042142: exit status 2 (328.067095ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-042142 -n embed-certs-042142
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-042142 -n embed-certs-042142: exit status 2 (343.749773ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-042142 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-042142 -n embed-certs-042142
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-042142 -n embed-certs-042142
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.12s)

TestStartStop/group/newest-cni/serial/FirstStart (36.07s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-638342 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
E0908 13:38:37.046726  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/old-k8s-version-191694/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-638342 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (36.070219565s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (36.07s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.44s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-000477 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [d20a6d4f-8063-4622-a794-bc91190c49be] Pending
helpers_test.go:352: "busybox" [d20a6d4f-8063-4622-a794-bc91190c49be] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [d20a6d4f-8063-4622-a794-bc91190c49be] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.004512139s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-000477 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.44s)
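
Reduced to plain shell, DeployApp creates the busybox pod from the repo's testdata, waits for it to become Ready, and then reads the file-descriptor limit inside the container. The create and exec commands are verbatim from the log; `kubectl wait` is a stand-in for the harness's own polling loop (the log shows it watching the pod from Pending through Running).

    #!/bin/sh
    ctx=default-k8s-diff-port-000477
    kubectl --context "$ctx" create -f testdata/busybox.yaml
    # The harness polls pod status itself; `kubectl wait` is an equivalent shortcut.
    kubectl --context "$ctx" wait --for=condition=Ready pod/busybox --timeout=8m
    # The actual assertion target: the open-files ulimit seen inside the container.
    kubectl --context "$ctx" exec busybox -- /bin/sh -c "ulimit -n"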

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.17s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-638342 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-638342 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.165547019s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.17s)

TestStartStop/group/newest-cni/serial/Stop (1.23s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-638342 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-638342 --alsologtostderr -v=3: (1.234517143s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.23s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-638342 -n newest-cni-638342
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-638342 -n newest-cni-638342: exit status 7 (78.850535ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-638342 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/newest-cni/serial/SecondStart (17.4s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-638342 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-638342 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (17.005310603s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-638342 -n newest-cni-638342
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (17.40s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.53s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-000477 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-000477 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.346405678s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-000477 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.53s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.34s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-000477 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-000477 --alsologtostderr -v=3: (12.344534573s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.34s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-638342 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)
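
VerifyKubernetesImages lists the images loaded into the profile and reports anything outside the expected minikube set (here kindest/kindnetd). A rough command-line equivalent is sketched below; the image-list invocation is the one from the log, while the `jq` field name and the registry filter are assumptions for illustration only.

    #!/bin/sh
    # Dump loaded images as JSON and print tags that are not stock registry.k8s.io images.
    out/minikube-linux-arm64 -p newest-cni-638342 image list --format=json \
      | jq -r '.[].repoTags[]?' \
      | grep -v '^registry.k8s.io/'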

TestStartStop/group/newest-cni/serial/Pause (3.12s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-638342 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-638342 -n newest-cni-638342
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-638342 -n newest-cni-638342: exit status 2 (332.369334ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-638342 -n newest-cni-638342
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-638342 -n newest-cni-638342: exit status 2 (304.313855ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-638342 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-638342 -n newest-cni-638342
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-638342 -n newest-cni-638342
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.12s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.32s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-000477 -n default-k8s-diff-port-000477
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-000477 -n default-k8s-diff-port-000477: exit status 7 (139.577161ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-000477 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.32s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (64.02s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-000477 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
E0908 13:39:18.008153  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/old-k8s-version-191694/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-000477 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (1m3.663556509s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-000477 -n default-k8s-diff-port-000477
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (64.02s)

TestNetworkPlugins/group/auto/Start (87.4s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-182649 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E0908 13:39:34.966422  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/functional-491794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-182649 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m27.395048183s)
--- PASS: TestNetworkPlugins/group/auto/Start (87.40s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-jcnrl" [050f2a3c-69c2-4a40-99c9-914eb4635f26] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004294572s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-jcnrl" [050f2a3c-69c2-4a40-99c9-914eb4635f26] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003359954s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-000477 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-000477 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-000477 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-000477 -n default-k8s-diff-port-000477
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-000477 -n default-k8s-diff-port-000477: exit status 2 (347.323299ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-000477 -n default-k8s-diff-port-000477
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-000477 -n default-k8s-diff-port-000477: exit status 2 (313.617413ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-000477 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-000477 -n default-k8s-diff-port-000477
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-000477 -n default-k8s-diff-port-000477
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.14s)

TestNetworkPlugins/group/kindnet/Start (87.95s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-182649 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E0908 13:40:39.929979  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/old-k8s-version-191694/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:40:43.283744  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/no-preload-848296/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:40:43.290280  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/no-preload-848296/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:40:43.301748  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/no-preload-848296/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:40:43.323266  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/no-preload-848296/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:40:43.364607  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/no-preload-848296/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:40:43.445961  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/no-preload-848296/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:40:43.607373  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/no-preload-848296/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:40:43.929157  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/no-preload-848296/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:40:44.571554  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/no-preload-848296/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:40:45.852959  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/no-preload-848296/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-182649 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m27.946812651s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (87.95s)

TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-182649 "pgrep -a kubelet"
I0908 13:40:46.555300  560849 config.go:182] Loaded profile config "auto-182649": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

TestNetworkPlugins/group/auto/NetCatPod (11.27s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-182649 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-pwc22" [ccee8d3d-db2d-4433-8b35-9956e5881c1d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0908 13:40:48.414626  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/no-preload-848296/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-pwc22" [ccee8d3d-db2d-4433-8b35-9956e5881c1d] Running
E0908 13:40:53.535920  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/no-preload-848296/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.00610706s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.27s)

TestNetworkPlugins/group/auto/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-182649 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.27s)

TestNetworkPlugins/group/auto/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-182649 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.23s)

TestNetworkPlugins/group/auto/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-182649 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.20s)
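
The DNS, Localhost and HairPin checks all run against the same netcat deployment: DNS resolves the in-cluster service name, Localhost confirms the pod can reach its own port over 127.0.0.1, and HairPin confirms it can reach itself back through its own service name (traffic looping through the service VIP). The three probes below are verbatim from the log; only the `set -e` bundling is added.

    #!/bin/sh
    set -e
    ctx=auto-182649
    # DNS: resolve the kubernetes.default service from inside the pod.
    kubectl --context "$ctx" exec deployment/netcat -- nslookup kubernetes.default
    # Localhost: port 8080 reachable on loopback.
    kubectl --context "$ctx" exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    # HairPin: the pod reaches its own service, exercising hairpin NAT.
    kubectl --context "$ctx" exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"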

TestNetworkPlugins/group/calico/Start (60.97s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-182649 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E0908 13:41:24.263535  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/no-preload-848296/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:41:29.298497  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/addons-090979/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:42:05.225608  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/no-preload-848296/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-182649 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m0.973418668s)
--- PASS: TestNetworkPlugins/group/calico/Start (60.97s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-rzth4" [f8a6092e-add5-494c-a606-af8c861490b7] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004678878s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
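
ControllerPod only verifies that the CNI daemon pod (label app=kindnet) is Running in kube-system before the connectivity tests proceed. An equivalent one-liner, with `kubectl wait` assumed as a stand-in for the harness's polling helper:

    kubectl --context kindnet-182649 -n kube-system wait --for=condition=Ready pod -l app=kindnet --timeout=10m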

TestNetworkPlugins/group/kindnet/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-182649 "pgrep -a kubelet"
I0908 13:42:12.521707  560849 config.go:182] Loaded profile config "kindnet-182649": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.41s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.32s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-182649 replace --force -f testdata/netcat-deployment.yaml
I0908 13:42:12.834685  560849 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-hbltv" [21db12d6-6a66-4ace-b7fa-0b5a8ae01950] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-hbltv" [21db12d6-6a66-4ace-b7fa-0b5a8ae01950] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004107336s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.32s)

TestNetworkPlugins/group/kindnet/DNS (0.28s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-182649 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.28s)

TestNetworkPlugins/group/kindnet/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-182649 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.23s)

TestNetworkPlugins/group/kindnet/HairPin (0.26s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-182649 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.26s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-xmrr2" [823c48f8-ea8c-4d90-ac28-22ab63c732c2] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-xmrr2" [823c48f8-ea8c-4d90-ac28-22ab63c732c2] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00484641s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-182649 "pgrep -a kubelet"
I0908 13:42:31.076190  560849 config.go:182] Loaded profile config "calico-182649": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.40s)

TestNetworkPlugins/group/calico/NetCatPod (13.45s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-182649 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-28wjm" [ffcf2b6e-e23b-4041-900c-bfcc63277002] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-28wjm" [ffcf2b6e-e23b-4041-900c-bfcc63277002] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.004789465s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.45s)

TestNetworkPlugins/group/calico/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-182649 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.23s)

TestNetworkPlugins/group/calico/Localhost (0.42s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-182649 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.42s)

TestNetworkPlugins/group/calico/HairPin (0.31s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-182649 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.31s)

TestNetworkPlugins/group/custom-flannel/Start (61.28s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-182649 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E0908 13:42:56.074118  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/old-k8s-version-191694/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-182649 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m1.282032088s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (61.28s)

TestNetworkPlugins/group/enable-default-cni/Start (79.11s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-182649 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E0908 13:43:23.772003  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/old-k8s-version-191694/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:43:27.147572  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/no-preload-848296/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-182649 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m19.106454953s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (79.11s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-182649 "pgrep -a kubelet"
I0908 13:43:49.302964  560849 config.go:182] Loaded profile config "custom-flannel-182649": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (12.31s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-182649 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-m5q5r" [062bd937-53bc-4dfa-9a3b-cb5d97924e0b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0908 13:43:51.130706  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/default-k8s-diff-port-000477/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:43:51.137598  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/default-k8s-diff-port-000477/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:43:51.149054  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/default-k8s-diff-port-000477/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:43:51.170628  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/default-k8s-diff-port-000477/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:43:51.212003  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/default-k8s-diff-port-000477/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:43:51.293546  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/default-k8s-diff-port-000477/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:43:51.455743  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/default-k8s-diff-port-000477/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:43:51.778144  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/default-k8s-diff-port-000477/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:43:52.419821  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/default-k8s-diff-port-000477/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:43:53.701914  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/default-k8s-diff-port-000477/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-m5q5r" [062bd937-53bc-4dfa-9a3b-cb5d97924e0b] Running
E0908 13:43:56.263257  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/default-k8s-diff-port-000477/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:44:01.384994  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/default-k8s-diff-port-000477/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.004293506s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.31s)
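(The NetCatPod step above, per the log, applies testdata/netcat-deployment.yaml and then polls until a pod labeled app=netcat reports Running. A minimal sketch of that poll in Go, shelling out to kubectl; this is not the minikube test source, and the context name is simply taken from the log:)

// netcat_wait.go - sketch of the apply-then-poll pattern shown in the log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	ctx := "custom-flannel-182649" // context name taken from the log above
	// Apply the netcat deployment, as net_test.go does via kubectl replace.
	exec.Command("kubectl", "--context", ctx, "replace", "--force",
		"-f", "testdata/netcat-deployment.yaml").Run()

	deadline := time.Now().Add(15 * time.Minute) // matches the 15m0s wait in the log
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", ctx, "get", "pods",
			"-l", "app=netcat", "-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil && strings.Contains(string(out), "Running") {
			fmt.Println("app=netcat healthy")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for app=netcat")
}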

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-182649 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.19s)
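(The DNS check simply execs nslookup kubernetes.default inside the netcat pod; resolution goes through CoreDNS over the pod network, so a failure here usually means pod-to-service traffic is broken rather than CoreDNS itself. Roughly the same probe in Go, as a sketch; it is only meaningful when run inside a cluster pod, where /etc/resolv.conf supplies the search domains that expand kubernetes.default:)

// dnsprobe.go - sketch of the nslookup check above; relies on the pod's
// /etc/resolv.conf search path to expand "kubernetes.default".
package main

import (
	"fmt"
	"net"
)

func main() {
	addrs, err := net.LookupHost("kubernetes.default")
	if err != nil {
		fmt.Println("cluster DNS lookup failed:", err)
		return
	}
	fmt.Println("kubernetes.default resolved to:", addrs)
}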

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-182649 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-182649 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.20s)
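(Localhost and HairPin both run nc in scan mode: -z connects without sending data, -w 5 caps the attempt at five seconds, and -i 5 sets the delay interval. The HairPin variant dials the service name "netcat" from a pod that backs that same service, which only succeeds when the CNI handles hairpin NAT. An equivalent connect-only probe in Go, again a sketch intended to run inside the netcat pod:)

// hairpin_probe.go - sketch of what "nc -w 5 -z netcat 8080" checks: open
// (and immediately close) a TCP connection to the pod's own Service, with a timeout.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// "netcat:8080" resolves via cluster DNS to the Service backing this pod.
	conn, err := net.DialTimeout("tcp", "netcat:8080", 5*time.Second)
	if err != nil {
		fmt.Println("hairpin connection failed:", err)
		return
	}
	conn.Close() // -z style: connect only, send no data
	fmt.Println("hairpin OK")
}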

                                                
                                    
TestNetworkPlugins/group/flannel/Start (92.55s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-182649 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-182649 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m32.549917993s)
--- PASS: TestNetworkPlugins/group/flannel/Start (92.55s)
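(Each Start test is a single minikube start invocation with the CNI under test; --wait=true --wait-timeout=15m makes minikube itself block until core components are healthy. A sketch of driving the same command from Go with an outer timeout; it assumes a minikube binary on PATH rather than the out/minikube-linux-arm64 build used in this run:)

// start_flannel.go - sketch of the Start step with a hard timeout around the
// child process via exec.CommandContext.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 20*time.Minute)
	defer cancel()
	cmd := exec.CommandContext(ctx, "minikube", "start", "-p", "flannel-182649",
		"--memory=3072", "--wait=true", "--wait-timeout=15m",
		"--cni=flannel", "--driver=docker", "--container-runtime=crio")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s\nerr: %v\n", out, err)
}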

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-182649 "pgrep -a kubelet"
I0908 13:44:31.279117  560849 config.go:182] Loaded profile config "enable-default-cni-182649": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.40s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-182649 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-lvngm" [9a4130ff-8c71-4905-9109-7ae02e0d5594] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0908 13:44:32.108220  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/default-k8s-diff-port-000477/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:44:34.965797  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/functional-491794/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-lvngm" [9a4130ff-8c71-4905-9109-7ae02e0d5594] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.003547976s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.32s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-182649 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.26s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-182649 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-182649 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (72.8s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-182649 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E0908 13:45:13.070342  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/default-k8s-diff-port-000477/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:45:43.284069  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/no-preload-848296/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:45:46.797362  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/auto-182649/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:45:46.803729  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/auto-182649/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:45:46.815031  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/auto-182649/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:45:46.836402  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/auto-182649/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:45:46.877794  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/auto-182649/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:45:46.959127  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/auto-182649/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:45:47.120473  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/auto-182649/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:45:47.441846  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/auto-182649/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:45:48.083927  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/auto-182649/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:45:49.366149  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/auto-182649/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0908 13:45:51.927959  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/auto-182649/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-182649 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m12.801810437s)
--- PASS: TestNetworkPlugins/group/bridge/Start (72.80s)
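(The cert_rotation errors interleaved here, and throughout the run, most likely come from the test binary's shared client-certificate reloader: it still watches client.crt paths for profiles that earlier tests already tore down, such as auto-182649, no-preload-848296, and default-k8s-diff-port-000477, so each reload attempt logs "no such file or directory". They are noise with respect to the bridge test itself, which passes.)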

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-mpfvh" [76ffb869-88f1-4a8e-8b0a-0ac807e3b47c] Running
E0908 13:45:57.049667  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/auto-182649/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00359323s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
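(ControllerPod gates the rest of the flannel group on the kube-flannel DaemonSet pod being Running. A one-shot equivalent using kubectl's own wait primitive, shelled out from Go; a sketch, with the context and label taken from the log:)

// flannel_ready.go - sketch of a one-shot readiness check for the flannel
// DaemonSet pods via "kubectl wait".
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "flannel-182649",
		"-n", "kube-flannel", "wait", "--for=condition=ready",
		"pod", "-l", "app=flannel", "--timeout=600s").CombinedOutput()
	fmt.Printf("%s\nerr: %v\n", out, err)
}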

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-182649 "pgrep -a kubelet"
I0908 13:46:02.404842  560849 config.go:182] Loaded profile config "flannel-182649": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (11.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-182649 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-lnprt" [d003bb13-49e0-4e39-84af-e281ce33ea78] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0908 13:46:07.291131  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/auto-182649/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-lnprt" [d003bb13-49e0-4e39-84af-e281ce33ea78] Running
E0908 13:46:10.989532  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/no-preload-848296/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.0037753s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.29s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-182649 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-182649 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-182649 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-182649 "pgrep -a kubelet"
I0908 13:46:22.020005  560849 config.go:182] Loaded profile config "bridge-182649": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.44s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (11.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-182649 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-676vq" [4d8baae7-6740-415c-948e-df6780178953] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0908 13:46:27.772413  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/auto-182649/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-676vq" [4d8baae7-6740-415c-948e-df6780178953] Running
E0908 13:46:29.298859  560849 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/addons-090979/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.005048028s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.42s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-182649 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.27s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-182649 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-182649 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.20s)

                                                
                                    

Test skip (32/332)

TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.58s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-574217 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-574217" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-574217
--- SKIP: TestDownloadOnlyKic (0.58s)

                                                
                                    
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
TestAddons/serial/Volcano (0.35s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-090979 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.35s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-530102" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-530102
--- SKIP: TestStartStop/group/disable-driver-mounts (0.21s)

                                                
                                    
TestNetworkPlugins/group/kubenet (5.71s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-182649 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-182649

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-182649

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-182649

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-182649

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-182649

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-182649

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-182649

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-182649

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-182649

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-182649

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-182649"

>>> host: /etc/hosts:
* Profile "kubenet-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-182649"

>>> host: /etc/resolv.conf:
* Profile "kubenet-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-182649"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-182649

>>> host: crictl pods:
* Profile "kubenet-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-182649"

>>> host: crictl containers:
* Profile "kubenet-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-182649"

>>> k8s: describe netcat deployment:
error: context "kubenet-182649" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-182649" does not exist

>>> k8s: netcat logs:
error: context "kubenet-182649" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-182649" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-182649" does not exist

>>> k8s: coredns logs:
error: context "kubenet-182649" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-182649" does not exist

>>> k8s: api server logs:
error: context "kubenet-182649" does not exist

>>> host: /etc/cni:
* Profile "kubenet-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-182649"

>>> host: ip a s:
* Profile "kubenet-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-182649"

>>> host: ip r s:
* Profile "kubenet-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-182649"

>>> host: iptables-save:
* Profile "kubenet-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-182649"

>>> host: iptables table nat:
* Profile "kubenet-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-182649"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-182649" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-182649" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-182649" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-182649"

>>> host: kubelet daemon config:
* Profile "kubenet-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-182649"

>>> k8s: kubelet logs:
* Profile "kubenet-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-182649"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-182649"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-182649"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21508-558996/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 08 Sep 2025 13:25:58 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-106949
contexts:
- context:
    cluster: kubernetes-upgrade-106949
    user: kubernetes-upgrade-106949
  name: kubernetes-upgrade-106949
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-106949
  user:
    client-certificate: /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/kubernetes-upgrade-106949/client.crt
    client-key: /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/kubernetes-upgrade-106949/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-182649

>>> host: docker daemon status:
* Profile "kubenet-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-182649"

>>> host: docker daemon config:
* Profile "kubenet-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-182649"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-182649"

>>> host: docker system info:
* Profile "kubenet-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-182649"

>>> host: cri-docker daemon status:
* Profile "kubenet-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-182649"

>>> host: cri-docker daemon config:
* Profile "kubenet-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-182649"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-182649"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-182649"

>>> host: cri-dockerd version:
* Profile "kubenet-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-182649"

>>> host: containerd daemon status:
* Profile "kubenet-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-182649"

>>> host: containerd daemon config:
* Profile "kubenet-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-182649"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-182649"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-182649"

>>> host: containerd config dump:
* Profile "kubenet-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-182649"

>>> host: crio daemon status:
* Profile "kubenet-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-182649"

>>> host: crio daemon config:
* Profile "kubenet-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-182649"

>>> host: /etc/crio:
* Profile "kubenet-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-182649"

>>> host: crio config:
* Profile "kubenet-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-182649"

----------------------- debugLogs end: kubenet-182649 [took: 5.50025702s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-182649" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-182649
--- SKIP: TestNetworkPlugins/group/kubenet (5.71s)

TestNetworkPlugins/group/cilium (6.02s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
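The skip above is emitted by a guard at the top of the test function. A minimal sketch of how such a guard is written with Go's testing package follows; the test name is illustrative, not minikube's actual function.

package net_test

import "testing"

// TestGroupCilium is an illustrative stand-in for the guarded test above.
func TestGroupCilium(t *testing.T) {
	// Bail out before doing any work; the reason string is what the
	// harness prints in the log line above.
	t.Skip("Skipping the test as it's interfering with other tests and is outdated")
}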
panic.go:636: 
----------------------- debugLogs start: cilium-182649 [pass: true] --------------------------------
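The netcat probes below all target 10.96.0.10, the conventional in-cluster DNS service address. For reference, the lookup that the dig probes attempt can be expressed in Go with a resolver pinned to that address; this sketch assumes a running cluster, which is exactly what is missing here.

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// Pin the resolver to the cluster DNS address probed in the log below.
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
			d := net.Dialer{Timeout: 2 * time.Second}
			return d.DialContext(ctx, network, "10.96.0.10:53")
		},
	}
	addrs, err := r.LookupHost(context.Background(), "kubernetes.default.svc.cluster.local")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println(addrs)
}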
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-182649

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-182649

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-182649

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-182649

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-182649

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-182649

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-182649

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-182649

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-182649

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-182649

>>> host: /etc/nsswitch.conf:
* Profile "cilium-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-182649"

>>> host: /etc/hosts:
* Profile "cilium-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-182649"

>>> host: /etc/resolv.conf:
* Profile "cilium-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-182649"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-182649

>>> host: crictl pods:
* Profile "cilium-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-182649"

>>> host: crictl containers:
* Profile "cilium-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-182649"

>>> k8s: describe netcat deployment:
error: context "cilium-182649" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-182649" does not exist

>>> k8s: netcat logs:
error: context "cilium-182649" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-182649" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-182649" does not exist

>>> k8s: coredns logs:
error: context "cilium-182649" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-182649" does not exist

>>> k8s: api server logs:
error: context "cilium-182649" does not exist

>>> host: /etc/cni:
* Profile "cilium-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-182649"

>>> host: ip a s:
* Profile "cilium-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-182649"

>>> host: ip r s:
* Profile "cilium-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-182649"

>>> host: iptables-save:
* Profile "cilium-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-182649"

>>> host: iptables table nat:
* Profile "cilium-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-182649"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-182649

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-182649

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-182649" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-182649" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-182649

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-182649

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-182649" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-182649" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-182649" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-182649" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-182649" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-182649"

>>> host: kubelet daemon config:
* Profile "cilium-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-182649"

>>> k8s: kubelet logs:
* Profile "cilium-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-182649"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-182649"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-182649"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21508-558996/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 08 Sep 2025 13:30:23 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-106949
contexts:
- context:
    cluster: kubernetes-upgrade-106949
    extensions:
    - extension:
        last-update: Mon, 08 Sep 2025 13:30:23 UTC
        provider: minikube.sigs.k8s.io
        version: v1.36.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-106949
  name: kubernetes-upgrade-106949
current-context: kubernetes-upgrade-106949
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-106949
  user:
    client-certificate: /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/kubernetes-upgrade-106949/client.crt
    client-key: /home/jenkins/minikube-integration/21508-558996/.minikube/profiles/kubernetes-upgrade-106949/client.key
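The "context was not found" failures throughout this dump are consistent with the config above: the only context present is kubernetes-upgrade-106949, not cilium-182649. A minimal sketch of the same existence check using client-go's clientcmd loader follows; the kubeconfig path here is an assumed example.

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed path for illustration; on the CI host the kubeconfig lives
	// under /home/jenkins/minikube-integration/... instead.
	cfg, err := clientcmd.LoadFromFile(os.Getenv("HOME") + "/.kube/config")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if _, ok := cfg.Contexts["cilium-182649"]; !ok {
		// Matches the kubectl error seen throughout this section.
		fmt.Println(`error: context "cilium-182649" does not exist`)
	}
}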

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-182649

>>> host: docker daemon status:
* Profile "cilium-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-182649"

>>> host: docker daemon config:
* Profile "cilium-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-182649"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-182649"

>>> host: docker system info:
* Profile "cilium-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-182649"

>>> host: cri-docker daemon status:
* Profile "cilium-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-182649"

>>> host: cri-docker daemon config:
* Profile "cilium-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-182649"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-182649"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-182649"

>>> host: cri-dockerd version:
* Profile "cilium-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-182649"

>>> host: containerd daemon status:
* Profile "cilium-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-182649"

>>> host: containerd daemon config:
* Profile "cilium-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-182649"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-182649"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-182649"

>>> host: containerd config dump:
* Profile "cilium-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-182649"

>>> host: crio daemon status:
* Profile "cilium-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-182649"

>>> host: crio daemon config:
* Profile "cilium-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-182649"

>>> host: /etc/crio:
* Profile "cilium-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-182649"

>>> host: crio config:
* Profile "cilium-182649" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-182649"

----------------------- debugLogs end: cilium-182649 [took: 5.819149731s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-182649" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-182649
--- SKIP: TestNetworkPlugins/group/cilium (6.02s)
