Test Report: Docker_Linux 20451

3de5109224746595ef816ce07f095d1725de7bd9:2025-02-24:38483

Failed tests (1/346)

| Order | Failed test                  | Duration (s) |
|-------|------------------------------|--------------|
| 35    | TestAddons/parallel/Registry | 72.51        |
TestAddons/parallel/Registry (72.51s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 24.793523ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c88467877-mkfm8" [997e9970-e3ba-46f8-a564-dca79745389d] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002822306s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-59cfj" [36568ecc-510f-46b5-8192-bc771e49bf12] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003701689s
addons_test.go:331: (dbg) Run:  kubectl --context addons-463362 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-463362 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Non-zero exit: kubectl --context addons-463362 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.074038353s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:338: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-463362 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:342: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-463362 ip
2025/02/24 11:57:10 [DEBUG] GET http://192.168.49.2:5000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-463362
helpers_test.go:235: (dbg) docker inspect addons-463362:

-- stdout --
	[
	    {
	        "Id": "5d1d319c72f3791c1f3f290d6548fe1a7c1f650f8411421fb0e7119c644ea9fe",
	        "Created": "2025-02-24T11:48:55.609882452Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 738095,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-02-24T11:48:55.638314288Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:61dca85afedfb4a78b22f2d660c8b9e1fe05d745080151dfb8c4fdd6e13072af",
	        "ResolvConfPath": "/var/lib/docker/containers/5d1d319c72f3791c1f3f290d6548fe1a7c1f650f8411421fb0e7119c644ea9fe/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5d1d319c72f3791c1f3f290d6548fe1a7c1f650f8411421fb0e7119c644ea9fe/hostname",
	        "HostsPath": "/var/lib/docker/containers/5d1d319c72f3791c1f3f290d6548fe1a7c1f650f8411421fb0e7119c644ea9fe/hosts",
	        "LogPath": "/var/lib/docker/containers/5d1d319c72f3791c1f3f290d6548fe1a7c1f650f8411421fb0e7119c644ea9fe/5d1d319c72f3791c1f3f290d6548fe1a7c1f650f8411421fb0e7119c644ea9fe-json.log",
	        "Name": "/addons-463362",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-463362:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-463362",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5d1d319c72f3791c1f3f290d6548fe1a7c1f650f8411421fb0e7119c644ea9fe",
	                "LowerDir": "/var/lib/docker/overlay2/37c19db2bafea3a217a92f518b354d77c73d2bf33b0f8c2bcf67ce672c0ef50a-init/diff:/var/lib/docker/overlay2/05c155cce2cfc44faa83686ccfdc4bd5e938395c8152bb1fb202837c7cbe7dab/diff",
	                "MergedDir": "/var/lib/docker/overlay2/37c19db2bafea3a217a92f518b354d77c73d2bf33b0f8c2bcf67ce672c0ef50a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/37c19db2bafea3a217a92f518b354d77c73d2bf33b0f8c2bcf67ce672c0ef50a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/37c19db2bafea3a217a92f518b354d77c73d2bf33b0f8c2bcf67ce672c0ef50a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-463362",
	                "Source": "/var/lib/docker/volumes/addons-463362/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-463362",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-463362",
	                "name.minikube.sigs.k8s.io": "addons-463362",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e32b5e153dba752f422836dd18793349f27b441850a251af37da22e0d0ae88ed",
	            "SandboxKey": "/var/run/docker/netns/e32b5e153dba",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-463362": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1a:20:9b:c6:a4:5b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1098c32c14647d059782656419ef4271008d05a22bf37b44c7616125648ddd69",
	                    "EndpointID": "857a077579bf2c852bddb073ba2ec08ad46c43ff478a533eee5e34dbdec30f66",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-463362",
	                        "5d1d319c72f3"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-463362 -n addons-463362
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-463362 logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-193212   | jenkins | v1.35.0 | 24 Feb 25 11:48 UTC |                     |
	|         | -p download-only-193212              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.35.0 | 24 Feb 25 11:48 UTC | 24 Feb 25 11:48 UTC |
	| delete  | -p download-only-193212              | download-only-193212   | jenkins | v1.35.0 | 24 Feb 25 11:48 UTC | 24 Feb 25 11:48 UTC |
	| start   | -o=json --download-only              | download-only-597495   | jenkins | v1.35.0 | 24 Feb 25 11:48 UTC |                     |
	|         | -p download-only-597495              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2         |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.35.0 | 24 Feb 25 11:48 UTC | 24 Feb 25 11:48 UTC |
	| delete  | -p download-only-597495              | download-only-597495   | jenkins | v1.35.0 | 24 Feb 25 11:48 UTC | 24 Feb 25 11:48 UTC |
	| delete  | -p download-only-193212              | download-only-193212   | jenkins | v1.35.0 | 24 Feb 25 11:48 UTC | 24 Feb 25 11:48 UTC |
	| delete  | -p download-only-597495              | download-only-597495   | jenkins | v1.35.0 | 24 Feb 25 11:48 UTC | 24 Feb 25 11:48 UTC |
	| start   | --download-only -p                   | download-docker-598050 | jenkins | v1.35.0 | 24 Feb 25 11:48 UTC |                     |
	|         | download-docker-598050               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	| delete  | -p download-docker-598050            | download-docker-598050 | jenkins | v1.35.0 | 24 Feb 25 11:48 UTC | 24 Feb 25 11:48 UTC |
	| start   | --download-only -p                   | binary-mirror-957722   | jenkins | v1.35.0 | 24 Feb 25 11:48 UTC |                     |
	|         | binary-mirror-957722                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:37367               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-957722              | binary-mirror-957722   | jenkins | v1.35.0 | 24 Feb 25 11:48 UTC | 24 Feb 25 11:48 UTC |
	| addons  | enable dashboard -p                  | addons-463362          | jenkins | v1.35.0 | 24 Feb 25 11:48 UTC |                     |
	|         | addons-463362                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-463362          | jenkins | v1.35.0 | 24 Feb 25 11:48 UTC |                     |
	|         | addons-463362                        |                        |         |         |                     |                     |
	| start   | -p addons-463362 --wait=true         | addons-463362          | jenkins | v1.35.0 | 24 Feb 25 11:48 UTC | 24 Feb 25 11:55 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	| addons  | addons-463362 addons disable         | addons-463362          | jenkins | v1.35.0 | 24 Feb 25 11:55 UTC | 24 Feb 25 11:55 UTC |
	|         | volcano --alsologtostderr -v=1       |                        |         |         |                     |                     |
	| addons  | addons-463362 addons disable         | addons-463362          | jenkins | v1.35.0 | 24 Feb 25 11:55 UTC | 24 Feb 25 11:56 UTC |
	|         | gcp-auth --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-463362          | jenkins | v1.35.0 | 24 Feb 25 11:56 UTC | 24 Feb 25 11:56 UTC |
	|         | -p addons-463362                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-463362 addons                 | addons-463362          | jenkins | v1.35.0 | 24 Feb 25 11:56 UTC | 24 Feb 25 11:56 UTC |
	|         | disable nvidia-device-plugin         |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-463362 addons disable         | addons-463362          | jenkins | v1.35.0 | 24 Feb 25 11:56 UTC | 24 Feb 25 11:56 UTC |
	|         | amd-gpu-device-plugin                |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-463362 addons                 | addons-463362          | jenkins | v1.35.0 | 24 Feb 25 11:56 UTC | 24 Feb 25 11:56 UTC |
	|         | disable metrics-server               |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-463362 addons disable         | addons-463362          | jenkins | v1.35.0 | 24 Feb 25 11:56 UTC | 24 Feb 25 11:56 UTC |
	|         | yakd --alsologtostderr -v=1          |                        |         |         |                     |                     |
	| addons  | addons-463362 addons                 | addons-463362          | jenkins | v1.35.0 | 24 Feb 25 11:56 UTC | 24 Feb 25 11:56 UTC |
	|         | disable inspektor-gadget             |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| ip      | addons-463362 ip                     | addons-463362          | jenkins | v1.35.0 | 24 Feb 25 11:57 UTC | 24 Feb 25 11:57 UTC |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/24 11:48:32
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0224 11:48:32.240019  737494 out.go:345] Setting OutFile to fd 1 ...
	I0224 11:48:32.240311  737494 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0224 11:48:32.240323  737494 out.go:358] Setting ErrFile to fd 2...
	I0224 11:48:32.240327  737494 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0224 11:48:32.240514  737494 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20451-729451/.minikube/bin
	I0224 11:48:32.241119  737494 out.go:352] Setting JSON to false
	I0224 11:48:32.242076  737494 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":66661,"bootTime":1740331051,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0224 11:48:32.242153  737494 start.go:139] virtualization: kvm guest
	I0224 11:48:32.243904  737494 out.go:177] * [addons-463362] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0224 11:48:32.245000  737494 out.go:177]   - MINIKUBE_LOCATION=20451
	I0224 11:48:32.245009  737494 notify.go:220] Checking for updates...
	I0224 11:48:32.247220  737494 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0224 11:48:32.248306  737494 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20451-729451/kubeconfig
	I0224 11:48:32.249331  737494 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20451-729451/.minikube
	I0224 11:48:32.250336  737494 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0224 11:48:32.251412  737494 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0224 11:48:32.252776  737494 driver.go:394] Setting default libvirt URI to qemu:///system
	I0224 11:48:32.273532  737494 docker.go:123] docker version: linux-28.0.0:Docker Engine - Community
	I0224 11:48:32.273616  737494 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0224 11:48:32.319888  737494 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:46 SystemTime:2025-02-24 11:48:32.311507282 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:28.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0224 11:48:32.319998  737494 docker.go:318] overlay module found
	I0224 11:48:32.321655  737494 out.go:177] * Using the docker driver based on user configuration
	I0224 11:48:32.322806  737494 start.go:297] selected driver: docker
	I0224 11:48:32.322825  737494 start.go:901] validating driver "docker" against <nil>
	I0224 11:48:32.322838  737494 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0224 11:48:32.323682  737494 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0224 11:48:32.370023  737494 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:46 SystemTime:2025-02-24 11:48:32.36175268 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:28.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0224 11:48:32.370182  737494 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0224 11:48:32.370427  737494 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0224 11:48:32.371927  737494 out.go:177] * Using Docker driver with root privileges
	I0224 11:48:32.372886  737494 cni.go:84] Creating CNI manager for ""
	I0224 11:48:32.372946  737494 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0224 11:48:32.372958  737494 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0224 11:48:32.373014  737494 start.go:340] cluster config:
	{Name:addons-463362 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-463362 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0224 11:48:32.374046  737494 out.go:177] * Starting "addons-463362" primary control-plane node in "addons-463362" cluster
	I0224 11:48:32.374912  737494 cache.go:121] Beginning downloading kic base image for docker with docker
	I0224 11:48:32.375934  737494 out.go:177] * Pulling base image v0.0.46-1740046583-20436 ...
	I0224 11:48:32.376901  737494 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0224 11:48:32.376929  737494 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20451-729451/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4
	I0224 11:48:32.376929  737494 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 in local docker daemon
	I0224 11:48:32.376933  737494 cache.go:56] Caching tarball of preloaded images
	I0224 11:48:32.377094  737494 preload.go:172] Found /home/jenkins/minikube-integration/20451-729451/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0224 11:48:32.377104  737494 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on docker
	I0224 11:48:32.377431  737494 profile.go:143] Saving config to /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/addons-463362/config.json ...
	I0224 11:48:32.377458  737494 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/addons-463362/config.json: {Name:mk961127798c88ca4673b269062deadfb4d11934 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 11:48:32.392871  737494 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 to local cache
	I0224 11:48:32.392977  737494 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 in local cache directory
	I0224 11:48:32.392998  737494 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 in local cache directory, skipping pull
	I0224 11:48:32.393020  737494 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 exists in cache, skipping pull
	I0224 11:48:32.393031  737494 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 as a tarball
	I0224 11:48:32.393036  737494 cache.go:163] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 from local cache
	I0224 11:48:44.297791  737494 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 from cached tarball
	I0224 11:48:44.297883  737494 cache.go:230] Successfully downloaded all kic artifacts
	I0224 11:48:44.297969  737494 start.go:360] acquireMachinesLock for addons-463362: {Name:mk6ccf4032afd6610f99c74b7ff712a1991e6cf7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0224 11:48:44.298097  737494 start.go:364] duration metric: took 101.458µs to acquireMachinesLock for "addons-463362"
	I0224 11:48:44.298124  737494 start.go:93] Provisioning new machine with config: &{Name:addons-463362 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-463362 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0224 11:48:44.298242  737494 start.go:125] createHost starting for "" (driver="docker")
	I0224 11:48:44.299881  737494 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0224 11:48:44.300152  737494 start.go:159] libmachine.API.Create for "addons-463362" (driver="docker")
	I0224 11:48:44.300220  737494 client.go:168] LocalClient.Create starting
	I0224 11:48:44.300388  737494 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20451-729451/.minikube/certs/ca.pem
	I0224 11:48:44.476664  737494 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20451-729451/.minikube/certs/cert.pem
	I0224 11:48:44.647818  737494 cli_runner.go:164] Run: docker network inspect addons-463362 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0224 11:48:44.663615  737494 cli_runner.go:211] docker network inspect addons-463362 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0224 11:48:44.663687  737494 network_create.go:284] running [docker network inspect addons-463362] to gather additional debugging logs...
	I0224 11:48:44.663708  737494 cli_runner.go:164] Run: docker network inspect addons-463362
	W0224 11:48:44.678947  737494 cli_runner.go:211] docker network inspect addons-463362 returned with exit code 1
	I0224 11:48:44.678976  737494 network_create.go:287] error running [docker network inspect addons-463362]: docker network inspect addons-463362: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-463362 not found
	I0224 11:48:44.678991  737494 network_create.go:289] output of [docker network inspect addons-463362]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-463362 not found
	
	** /stderr **
	I0224 11:48:44.679088  737494 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0224 11:48:44.693906  737494 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00206a7a0}
	I0224 11:48:44.693952  737494 network_create.go:124] attempt to create docker network addons-463362 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0224 11:48:44.694010  737494 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-463362 addons-463362
	I0224 11:48:44.738745  737494 network_create.go:108] docker network addons-463362 192.168.49.0/24 created
	I0224 11:48:44.738782  737494 kic.go:121] calculated static IP "192.168.49.2" for the "addons-463362" container
	I0224 11:48:44.738855  737494 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0224 11:48:44.753656  737494 cli_runner.go:164] Run: docker volume create addons-463362 --label name.minikube.sigs.k8s.io=addons-463362 --label created_by.minikube.sigs.k8s.io=true
	I0224 11:48:44.769663  737494 oci.go:103] Successfully created a docker volume addons-463362
	I0224 11:48:44.769737  737494 cli_runner.go:164] Run: docker run --rm --name addons-463362-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-463362 --entrypoint /usr/bin/test -v addons-463362:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 -d /var/lib
	I0224 11:48:51.581828  737494 cli_runner.go:217] Completed: docker run --rm --name addons-463362-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-463362 --entrypoint /usr/bin/test -v addons-463362:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 -d /var/lib: (6.812045331s)
	I0224 11:48:51.581863  737494 oci.go:107] Successfully prepared a docker volume addons-463362
	I0224 11:48:51.581886  737494 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0224 11:48:51.581917  737494 kic.go:194] Starting extracting preloaded images to volume ...
	I0224 11:48:51.581980  737494 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20451-729451/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-463362:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 -I lz4 -xf /preloaded.tar -C /extractDir
	I0224 11:48:55.548509  737494 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20451-729451/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-463362:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 -I lz4 -xf /preloaded.tar -C /extractDir: (3.966453686s)
	I0224 11:48:55.548558  737494 kic.go:203] duration metric: took 3.966637378s to extract preloaded images to volume ...
	W0224 11:48:55.548713  737494 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0224 11:48:55.548856  737494 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0224 11:48:55.594833  737494 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-463362 --name addons-463362 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-463362 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-463362 --network addons-463362 --ip 192.168.49.2 --volume addons-463362:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4
	I0224 11:48:55.867340  737494 cli_runner.go:164] Run: docker container inspect addons-463362 --format={{.State.Running}}
	I0224 11:48:55.885773  737494 cli_runner.go:164] Run: docker container inspect addons-463362 --format={{.State.Status}}
	I0224 11:48:55.904256  737494 cli_runner.go:164] Run: docker exec addons-463362 stat /var/lib/dpkg/alternatives/iptables
	I0224 11:48:55.944739  737494 oci.go:144] the created container "addons-463362" has a running status.
	I0224 11:48:55.944780  737494 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20451-729451/.minikube/machines/addons-463362/id_rsa...
	I0224 11:48:56.054567  737494 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20451-729451/.minikube/machines/addons-463362/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0224 11:48:56.074951  737494 cli_runner.go:164] Run: docker container inspect addons-463362 --format={{.State.Status}}
	I0224 11:48:56.091995  737494 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0224 11:48:56.092016  737494 kic_runner.go:114] Args: [docker exec --privileged addons-463362 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0224 11:48:56.134189  737494 cli_runner.go:164] Run: docker container inspect addons-463362 --format={{.State.Status}}
	I0224 11:48:56.155161  737494 machine.go:93] provisionDockerMachine start ...
	I0224 11:48:56.155278  737494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463362
	I0224 11:48:56.175459  737494 main.go:141] libmachine: Using SSH client type: native
	I0224 11:48:56.175685  737494 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0224 11:48:56.175701  737494 main.go:141] libmachine: About to run SSH command:
	hostname
	I0224 11:48:56.176469  737494 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51544->127.0.0.1:32768: read: connection reset by peer
	I0224 11:48:59.288463  737494 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-463362
	
	I0224 11:48:59.288502  737494 ubuntu.go:169] provisioning hostname "addons-463362"
	I0224 11:48:59.288572  737494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463362
	I0224 11:48:59.305949  737494 main.go:141] libmachine: Using SSH client type: native
	I0224 11:48:59.306154  737494 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0224 11:48:59.306172  737494 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-463362 && echo "addons-463362" | sudo tee /etc/hostname
	I0224 11:48:59.427759  737494 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-463362
	
	I0224 11:48:59.427843  737494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463362
	I0224 11:48:59.444523  737494 main.go:141] libmachine: Using SSH client type: native
	I0224 11:48:59.444706  737494 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0224 11:48:59.444725  737494 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-463362' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-463362/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-463362' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0224 11:48:59.553143  737494 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0224 11:48:59.553195  737494 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20451-729451/.minikube CaCertPath:/home/jenkins/minikube-integration/20451-729451/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20451-729451/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20451-729451/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20451-729451/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20451-729451/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20451-729451/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20451-729451/.minikube}
	I0224 11:48:59.553240  737494 ubuntu.go:177] setting up certificates
	I0224 11:48:59.553252  737494 provision.go:84] configureAuth start
	I0224 11:48:59.553311  737494 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-463362
	I0224 11:48:59.569724  737494 provision.go:143] copyHostCerts
	I0224 11:48:59.569818  737494 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20451-729451/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20451-729451/.minikube/ca.pem (1078 bytes)
	I0224 11:48:59.569966  737494 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20451-729451/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20451-729451/.minikube/cert.pem (1123 bytes)
	I0224 11:48:59.570061  737494 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20451-729451/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20451-729451/.minikube/key.pem (1679 bytes)
	I0224 11:48:59.570191  737494 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20451-729451/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20451-729451/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20451-729451/.minikube/certs/ca-key.pem org=jenkins.addons-463362 san=[127.0.0.1 192.168.49.2 addons-463362 localhost minikube]
	I0224 11:48:59.738942  737494 provision.go:177] copyRemoteCerts
	I0224 11:48:59.738998  737494 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0224 11:48:59.739034  737494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463362
	I0224 11:48:59.755214  737494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20451-729451/.minikube/machines/addons-463362/id_rsa Username:docker}
	I0224 11:48:59.837205  737494 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-729451/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0224 11:48:59.858329  737494 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-729451/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0224 11:48:59.878857  737494 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-729451/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0224 11:48:59.899421  737494 provision.go:87] duration metric: took 346.151018ms to configureAuth
	I0224 11:48:59.899453  737494 ubuntu.go:193] setting minikube options for container-runtime
	I0224 11:48:59.899679  737494 config.go:182] Loaded profile config "addons-463362": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0224 11:48:59.899749  737494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463362
	I0224 11:48:59.916564  737494 main.go:141] libmachine: Using SSH client type: native
	I0224 11:48:59.916754  737494 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0224 11:48:59.916771  737494 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0224 11:49:00.025151  737494 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0224 11:49:00.025188  737494 ubuntu.go:71] root file system type: overlay
	I0224 11:49:00.025299  737494 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0224 11:49:00.025356  737494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463362
	I0224 11:49:00.042375  737494 main.go:141] libmachine: Using SSH client type: native
	I0224 11:49:00.042544  737494 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0224 11:49:00.042638  737494 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0224 11:49:00.163317  737494 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0224 11:49:00.163395  737494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463362
	I0224 11:49:00.180130  737494 main.go:141] libmachine: Using SSH client type: native
	I0224 11:49:00.180359  737494 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x865ca0] 0x868980 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0224 11:49:00.180389  737494 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0224 11:49:00.835942  737494 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-02-19 22:09:11.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-02-24 11:49:00.160456989 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0224 11:49:00.836009  737494 machine.go:96] duration metric: took 4.680817961s to provisionDockerMachine
	I0224 11:49:00.836023  737494 client.go:171] duration metric: took 16.535791156s to LocalClient.Create
	I0224 11:49:00.836043  737494 start.go:167] duration metric: took 16.535893189s to libmachine.API.Create "addons-463362"
	I0224 11:49:00.836051  737494 start.go:293] postStartSetup for "addons-463362" (driver="docker")
	I0224 11:49:00.836061  737494 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0224 11:49:00.836114  737494 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0224 11:49:00.836153  737494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463362
	I0224 11:49:00.853029  737494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20451-729451/.minikube/machines/addons-463362/id_rsa Username:docker}
	I0224 11:49:00.937843  737494 ssh_runner.go:195] Run: cat /etc/os-release
	I0224 11:49:00.940873  737494 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0224 11:49:00.940903  737494 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0224 11:49:00.940911  737494 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0224 11:49:00.940917  737494 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0224 11:49:00.940929  737494 filesync.go:126] Scanning /home/jenkins/minikube-integration/20451-729451/.minikube/addons for local assets ...
	I0224 11:49:00.940979  737494 filesync.go:126] Scanning /home/jenkins/minikube-integration/20451-729451/.minikube/files for local assets ...
	I0224 11:49:00.941002  737494 start.go:296] duration metric: took 104.945169ms for postStartSetup
	I0224 11:49:00.941287  737494 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-463362
	I0224 11:49:00.957624  737494 profile.go:143] Saving config to /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/addons-463362/config.json ...
	I0224 11:49:00.957859  737494 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0224 11:49:00.957898  737494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463362
	I0224 11:49:00.973783  737494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20451-729451/.minikube/machines/addons-463362/id_rsa Username:docker}
	I0224 11:49:01.053585  737494 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0224 11:49:01.057659  737494 start.go:128] duration metric: took 16.759394582s to createHost
	I0224 11:49:01.057687  737494 start.go:83] releasing machines lock for "addons-463362", held for 16.75957571s
	I0224 11:49:01.057773  737494 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-463362
	I0224 11:49:01.073608  737494 ssh_runner.go:195] Run: cat /version.json
	I0224 11:49:01.073661  737494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463362
	I0224 11:49:01.073707  737494 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0224 11:49:01.073771  737494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463362
	I0224 11:49:01.090233  737494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20451-729451/.minikube/machines/addons-463362/id_rsa Username:docker}
	I0224 11:49:01.090592  737494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20451-729451/.minikube/machines/addons-463362/id_rsa Username:docker}
	I0224 11:49:01.168571  737494 ssh_runner.go:195] Run: systemctl --version
	I0224 11:49:01.235520  737494 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0224 11:49:01.239455  737494 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0224 11:49:01.261830  737494 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0224 11:49:01.261904  737494 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0224 11:49:01.286381  737494 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0224 11:49:01.286415  737494 start.go:495] detecting cgroup driver to use...
	I0224 11:49:01.286451  737494 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0224 11:49:01.286605  737494 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0224 11:49:01.300823  737494 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0224 11:49:01.309496  737494 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0224 11:49:01.318024  737494 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0224 11:49:01.318079  737494 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0224 11:49:01.326479  737494 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0224 11:49:01.334885  737494 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0224 11:49:01.343075  737494 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0224 11:49:01.351404  737494 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0224 11:49:01.359271  737494 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0224 11:49:01.367757  737494 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0224 11:49:01.376213  737494 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0224 11:49:01.384653  737494 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0224 11:49:01.391902  737494 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0224 11:49:01.391947  737494 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0224 11:49:01.403782  737494 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0224 11:49:01.411327  737494 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 11:49:01.490953  737494 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0224 11:49:01.582113  737494 start.go:495] detecting cgroup driver to use...
	I0224 11:49:01.582169  737494 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0224 11:49:01.582222  737494 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0224 11:49:01.593403  737494 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0224 11:49:01.593459  737494 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0224 11:49:01.604270  737494 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0224 11:49:01.619686  737494 ssh_runner.go:195] Run: which cri-dockerd
	I0224 11:49:01.622828  737494 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0224 11:49:01.631521  737494 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0224 11:49:01.647725  737494 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0224 11:49:01.728051  737494 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0224 11:49:01.824182  737494 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0224 11:49:01.824349  737494 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0224 11:49:01.841286  737494 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 11:49:01.930678  737494 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0224 11:49:02.187888  737494 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0224 11:49:02.198776  737494 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0224 11:49:02.208760  737494 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0224 11:49:02.283299  737494 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0224 11:49:02.359159  737494 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 11:49:02.432026  737494 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0224 11:49:02.443956  737494 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0224 11:49:02.453500  737494 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 11:49:02.528296  737494 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0224 11:49:02.588763  737494 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0224 11:49:02.588858  737494 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0224 11:49:02.592937  737494 start.go:563] Will wait 60s for crictl version
	I0224 11:49:02.592983  737494 ssh_runner.go:195] Run: which crictl
	I0224 11:49:02.596296  737494 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0224 11:49:02.628055  737494 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.0.0
	RuntimeApiVersion:  v1
	I0224 11:49:02.628113  737494 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0224 11:49:02.651313  737494 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0224 11:49:02.675261  737494 out.go:235] * Preparing Kubernetes v1.32.2 on Docker 28.0.0 ...
	I0224 11:49:02.675354  737494 cli_runner.go:164] Run: docker network inspect addons-463362 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0224 11:49:02.691282  737494 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0224 11:49:02.694780  737494 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0224 11:49:02.705057  737494 kubeadm.go:883] updating cluster {Name:addons-463362 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-463362 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuF
irmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0224 11:49:02.705191  737494 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0224 11:49:02.705253  737494 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0224 11:49:02.724309  737494 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.32.2
	registry.k8s.io/kube-controller-manager:v1.32.2
	registry.k8s.io/kube-scheduler:v1.32.2
	registry.k8s.io/kube-proxy:v1.32.2
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0224 11:49:02.724334  737494 docker.go:619] Images already preloaded, skipping extraction
	I0224 11:49:02.724383  737494 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0224 11:49:02.742695  737494 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.32.2
	registry.k8s.io/kube-controller-manager:v1.32.2
	registry.k8s.io/kube-scheduler:v1.32.2
	registry.k8s.io/kube-proxy:v1.32.2
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0224 11:49:02.742717  737494 cache_images.go:84] Images are preloaded, skipping loading
	I0224 11:49:02.742728  737494 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.32.2 docker true true} ...
	I0224 11:49:02.742829  737494 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-463362 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:addons-463362 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0224 11:49:02.742883  737494 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0224 11:49:02.788779  737494 cni.go:84] Creating CNI manager for ""
	I0224 11:49:02.788814  737494 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0224 11:49:02.788842  737494 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0224 11:49:02.788870  737494 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-463362 NodeName:addons-463362 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuber
netes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0224 11:49:02.789046  737494 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-463362"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0224 11:49:02.789129  737494 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0224 11:49:02.797610  737494 binaries.go:44] Found k8s binaries, skipping transfer
	I0224 11:49:02.797669  737494 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0224 11:49:02.805675  737494 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0224 11:49:02.821954  737494 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0224 11:49:02.837950  737494 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2291 bytes)
	I0224 11:49:02.853829  737494 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0224 11:49:02.856911  737494 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0224 11:49:02.866494  737494 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 11:49:02.940868  737494 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0224 11:49:02.953847  737494 certs.go:68] Setting up /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/addons-463362 for IP: 192.168.49.2
	I0224 11:49:02.953870  737494 certs.go:194] generating shared ca certs ...
	I0224 11:49:02.953888  737494 certs.go:226] acquiring lock for ca certs: {Name:mk634bd91f9f93bc9e7ced82c267fcaf52342451 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 11:49:02.954005  737494 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20451-729451/.minikube/ca.key
	I0224 11:49:03.180505  737494 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20451-729451/.minikube/ca.crt ...
	I0224 11:49:03.180545  737494 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20451-729451/.minikube/ca.crt: {Name:mk2e1410b7e5857ab7a53496f5dfb9f047963967 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 11:49:03.180720  737494 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20451-729451/.minikube/ca.key ...
	I0224 11:49:03.180731  737494 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20451-729451/.minikube/ca.key: {Name:mkd3905df885cb4bd1b9e8c2a1301a47cd887664 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 11:49:03.180816  737494 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20451-729451/.minikube/proxy-client-ca.key
	I0224 11:49:03.316870  737494 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20451-729451/.minikube/proxy-client-ca.crt ...
	I0224 11:49:03.316906  737494 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20451-729451/.minikube/proxy-client-ca.crt: {Name:mk4f2b60177d3f0c0c8bc3f7b8e28a62e81b13cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 11:49:03.317145  737494 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20451-729451/.minikube/proxy-client-ca.key ...
	I0224 11:49:03.317162  737494 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20451-729451/.minikube/proxy-client-ca.key: {Name:mk01dc03d28e44995ef2840730d078cd09c70910 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 11:49:03.317266  737494 certs.go:256] generating profile certs ...
	I0224 11:49:03.317340  737494 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/addons-463362/client.key
	I0224 11:49:03.317355  737494 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/addons-463362/client.crt with IP's: []
	I0224 11:49:03.452414  737494 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/addons-463362/client.crt ...
	I0224 11:49:03.452451  737494 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/addons-463362/client.crt: {Name:mkd98afa6cbee2199070ed995a458a1d3836b704 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 11:49:03.452627  737494 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/addons-463362/client.key ...
	I0224 11:49:03.452638  737494 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/addons-463362/client.key: {Name:mk7cd93ae011ea753c7e1d1c7eadb54f01b92ec4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 11:49:03.452708  737494 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/addons-463362/apiserver.key.db8d7b1b
	I0224 11:49:03.452726  737494 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/addons-463362/apiserver.crt.db8d7b1b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0224 11:49:03.566245  737494 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/addons-463362/apiserver.crt.db8d7b1b ...
	I0224 11:49:03.566282  737494 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/addons-463362/apiserver.crt.db8d7b1b: {Name:mka00657dcbabfffe5574e76b2f90ed0267bea75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 11:49:03.566456  737494 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/addons-463362/apiserver.key.db8d7b1b ...
	I0224 11:49:03.566470  737494 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/addons-463362/apiserver.key.db8d7b1b: {Name:mkcc575db5475da182c7b5e3ea8174ac833f851e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 11:49:03.566553  737494 certs.go:381] copying /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/addons-463362/apiserver.crt.db8d7b1b -> /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/addons-463362/apiserver.crt
	I0224 11:49:03.566644  737494 certs.go:385] copying /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/addons-463362/apiserver.key.db8d7b1b -> /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/addons-463362/apiserver.key
	I0224 11:49:03.566693  737494 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/addons-463362/proxy-client.key
	I0224 11:49:03.566710  737494 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/addons-463362/proxy-client.crt with IP's: []
	I0224 11:49:03.775221  737494 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/addons-463362/proxy-client.crt ...
	I0224 11:49:03.775256  737494 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/addons-463362/proxy-client.crt: {Name:mk4fc54249ea97fb76f728f19eaddeeddee33b6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 11:49:03.775428  737494 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/addons-463362/proxy-client.key ...
	I0224 11:49:03.775444  737494 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/addons-463362/proxy-client.key: {Name:mk840b9a714b1351cc814c6a0106cae1172ef320 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 11:49:03.775640  737494 certs.go:484] found cert: /home/jenkins/minikube-integration/20451-729451/.minikube/certs/ca-key.pem (1679 bytes)
	I0224 11:49:03.775678  737494 certs.go:484] found cert: /home/jenkins/minikube-integration/20451-729451/.minikube/certs/ca.pem (1078 bytes)
	I0224 11:49:03.775703  737494 certs.go:484] found cert: /home/jenkins/minikube-integration/20451-729451/.minikube/certs/cert.pem (1123 bytes)
	I0224 11:49:03.775727  737494 certs.go:484] found cert: /home/jenkins/minikube-integration/20451-729451/.minikube/certs/key.pem (1679 bytes)
	I0224 11:49:03.776350  737494 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-729451/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0224 11:49:03.798404  737494 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-729451/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0224 11:49:03.819362  737494 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-729451/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0224 11:49:03.840037  737494 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-729451/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0224 11:49:03.861445  737494 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/addons-463362/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0224 11:49:03.882434  737494 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/addons-463362/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0224 11:49:03.903640  737494 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/addons-463362/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0224 11:49:03.924656  737494 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/addons-463362/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0224 11:49:03.945691  737494 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20451-729451/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0224 11:49:03.966720  737494 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0224 11:49:03.982506  737494 ssh_runner.go:195] Run: openssl version
	I0224 11:49:03.987492  737494 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0224 11:49:03.995810  737494 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0224 11:49:03.998914  737494 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 24 11:49 /usr/share/ca-certificates/minikubeCA.pem
	I0224 11:49:03.998973  737494 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0224 11:49:04.004997  737494 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0224 11:49:04.012987  737494 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0224 11:49:04.016056  737494 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0224 11:49:04.016115  737494 kubeadm.go:392] StartCluster: {Name:addons-463362 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:addons-463362 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0224 11:49:04.016215  737494 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0224 11:49:04.033581  737494 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0224 11:49:04.041461  737494 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0224 11:49:04.049287  737494 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0224 11:49:04.049331  737494 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0224 11:49:04.056628  737494 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0224 11:49:04.056650  737494 kubeadm.go:157] found existing configuration files:
	
	I0224 11:49:04.056689  737494 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0224 11:49:04.064287  737494 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0224 11:49:04.064336  737494 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0224 11:49:04.071657  737494 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0224 11:49:04.079119  737494 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0224 11:49:04.079160  737494 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0224 11:49:04.086454  737494 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0224 11:49:04.094080  737494 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0224 11:49:04.094120  737494 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0224 11:49:04.101252  737494 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0224 11:49:04.108471  737494 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0224 11:49:04.108509  737494 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0224 11:49:04.115823  737494 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0224 11:49:04.168239  737494 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0224 11:49:04.168532  737494 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1075-gcp\n", err: exit status 1
	I0224 11:49:04.220523  737494 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0224 11:49:13.397547  737494 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0224 11:49:13.397599  737494 kubeadm.go:310] [preflight] Running pre-flight checks
	I0224 11:49:13.397674  737494 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0224 11:49:13.397730  737494 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1075-gcp
	I0224 11:49:13.397761  737494 kubeadm.go:310] OS: Linux
	I0224 11:49:13.397802  737494 kubeadm.go:310] CGROUPS_CPU: enabled
	I0224 11:49:13.397887  737494 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0224 11:49:13.397960  737494 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0224 11:49:13.398005  737494 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0224 11:49:13.398090  737494 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0224 11:49:13.398181  737494 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0224 11:49:13.398249  737494 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0224 11:49:13.398324  737494 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0224 11:49:13.398391  737494 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0224 11:49:13.398490  737494 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0224 11:49:13.398609  737494 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0224 11:49:13.398733  737494 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0224 11:49:13.398826  737494 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0224 11:49:13.400936  737494 out.go:235]   - Generating certificates and keys ...
	I0224 11:49:13.401024  737494 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0224 11:49:13.401106  737494 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0224 11:49:13.401228  737494 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0224 11:49:13.401302  737494 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0224 11:49:13.401360  737494 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0224 11:49:13.401405  737494 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0224 11:49:13.401460  737494 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0224 11:49:13.401580  737494 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-463362 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0224 11:49:13.401664  737494 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0224 11:49:13.401793  737494 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-463362 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0224 11:49:13.401895  737494 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0224 11:49:13.401968  737494 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0224 11:49:13.402008  737494 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0224 11:49:13.402055  737494 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0224 11:49:13.402115  737494 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0224 11:49:13.402184  737494 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0224 11:49:13.402235  737494 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0224 11:49:13.402292  737494 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0224 11:49:13.402342  737494 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0224 11:49:13.402417  737494 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0224 11:49:13.402503  737494 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0224 11:49:13.403693  737494 out.go:235]   - Booting up control plane ...
	I0224 11:49:13.403782  737494 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0224 11:49:13.403858  737494 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0224 11:49:13.403953  737494 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0224 11:49:13.404053  737494 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0224 11:49:13.404128  737494 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0224 11:49:13.404163  737494 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0224 11:49:13.404292  737494 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0224 11:49:13.404405  737494 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0224 11:49:13.404457  737494 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.714642ms
	I0224 11:49:13.404517  737494 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0224 11:49:13.404575  737494 kubeadm.go:310] [api-check] The API server is healthy after 4.500950405s
	I0224 11:49:13.404678  737494 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0224 11:49:13.404787  737494 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0224 11:49:13.404874  737494 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0224 11:49:13.405044  737494 kubeadm.go:310] [mark-control-plane] Marking the node addons-463362 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0224 11:49:13.405100  737494 kubeadm.go:310] [bootstrap-token] Using token: mp1kd2.i5itmk5v6hxa47wl
	I0224 11:49:13.406973  737494 out.go:235]   - Configuring RBAC rules ...
	I0224 11:49:13.407080  737494 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0224 11:49:13.407169  737494 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0224 11:49:13.407316  737494 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0224 11:49:13.407463  737494 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0224 11:49:13.407577  737494 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0224 11:49:13.407671  737494 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0224 11:49:13.407787  737494 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0224 11:49:13.407826  737494 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0224 11:49:13.407878  737494 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0224 11:49:13.407884  737494 kubeadm.go:310] 
	I0224 11:49:13.407933  737494 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0224 11:49:13.407939  737494 kubeadm.go:310] 
	I0224 11:49:13.408005  737494 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0224 11:49:13.408011  737494 kubeadm.go:310] 
	I0224 11:49:13.408031  737494 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0224 11:49:13.408086  737494 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0224 11:49:13.408134  737494 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0224 11:49:13.408140  737494 kubeadm.go:310] 
	I0224 11:49:13.408185  737494 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0224 11:49:13.408191  737494 kubeadm.go:310] 
	I0224 11:49:13.408230  737494 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0224 11:49:13.408239  737494 kubeadm.go:310] 
	I0224 11:49:13.408290  737494 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0224 11:49:13.408373  737494 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0224 11:49:13.408479  737494 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0224 11:49:13.408491  737494 kubeadm.go:310] 
	I0224 11:49:13.408586  737494 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0224 11:49:13.408687  737494 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0224 11:49:13.408697  737494 kubeadm.go:310] 
	I0224 11:49:13.408779  737494 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token mp1kd2.i5itmk5v6hxa47wl \
	I0224 11:49:13.408882  737494 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4383786f785035cd6f1aafa06721bc731d78658f8a8be1fa8dd3906229bc8be3 \
	I0224 11:49:13.408912  737494 kubeadm.go:310] 	--control-plane 
	I0224 11:49:13.408918  737494 kubeadm.go:310] 
	I0224 11:49:13.408990  737494 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0224 11:49:13.408996  737494 kubeadm.go:310] 
	I0224 11:49:13.409070  737494 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token mp1kd2.i5itmk5v6hxa47wl \
	I0224 11:49:13.409224  737494 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4383786f785035cd6f1aafa06721bc731d78658f8a8be1fa8dd3906229bc8be3 
	I0224 11:49:13.409242  737494 cni.go:84] Creating CNI manager for ""
	I0224 11:49:13.409265  737494 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0224 11:49:13.410534  737494 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0224 11:49:13.411626  737494 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0224 11:49:13.420181  737494 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0224 11:49:13.436209  737494 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0224 11:49:13.436275  737494 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 11:49:13.436314  737494 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-463362 minikube.k8s.io/updated_at=2025_02_24T11_49_13_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=b76650f53499dbb51707efa4a87e94b72d747650 minikube.k8s.io/name=addons-463362 minikube.k8s.io/primary=true
	I0224 11:49:13.500353  737494 ops.go:34] apiserver oom_adj: -16
	I0224 11:49:13.516668  737494 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 11:49:14.017316  737494 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 11:49:14.516885  737494 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 11:49:15.017566  737494 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 11:49:15.517039  737494 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 11:49:16.017433  737494 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 11:49:16.517707  737494 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 11:49:17.017323  737494 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 11:49:17.517393  737494 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 11:49:18.017436  737494 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0224 11:49:18.083018  737494 kubeadm.go:1113] duration metric: took 4.646795722s to wait for elevateKubeSystemPrivileges
	I0224 11:49:18.083058  737494 kubeadm.go:394] duration metric: took 14.066946791s to StartCluster
	I0224 11:49:18.083083  737494 settings.go:142] acquiring lock: {Name:mk8bea5ae7a06d1424278239bf5252b25b2c798f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 11:49:18.083203  737494 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20451-729451/kubeconfig
	I0224 11:49:18.083712  737494 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20451-729451/kubeconfig: {Name:mkfdbf09dc88810cd4abd55028aebef4b78813fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0224 11:49:18.083940  737494 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0224 11:49:18.084105  737494 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0224 11:49:18.084132  737494 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0224 11:49:18.084273  737494 addons.go:69] Setting yakd=true in profile "addons-463362"
	I0224 11:49:18.084282  737494 addons.go:69] Setting gcp-auth=true in profile "addons-463362"
	I0224 11:49:18.084300  737494 addons.go:238] Setting addon yakd=true in "addons-463362"
	I0224 11:49:18.084302  737494 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-463362"
	I0224 11:49:18.084314  737494 mustload.go:65] Loading cluster: addons-463362
	I0224 11:49:18.084316  737494 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-463362"
	I0224 11:49:18.084330  737494 addons.go:69] Setting inspektor-gadget=true in profile "addons-463362"
	I0224 11:49:18.084339  737494 host.go:66] Checking if "addons-463362" exists ...
	I0224 11:49:18.084340  737494 host.go:66] Checking if "addons-463362" exists ...
	I0224 11:49:18.084348  737494 addons.go:238] Setting addon inspektor-gadget=true in "addons-463362"
	I0224 11:49:18.084346  737494 config.go:182] Loaded profile config "addons-463362": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0224 11:49:18.084380  737494 host.go:66] Checking if "addons-463362" exists ...
	I0224 11:49:18.084403  737494 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-463362"
	I0224 11:49:18.084444  737494 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-463362"
	I0224 11:49:18.084470  737494 host.go:66] Checking if "addons-463362" exists ...
	I0224 11:49:18.084535  737494 config.go:182] Loaded profile config "addons-463362": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0224 11:49:18.084548  737494 addons.go:69] Setting default-storageclass=true in profile "addons-463362"
	I0224 11:49:18.084580  737494 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-463362"
	I0224 11:49:18.084812  737494 cli_runner.go:164] Run: docker container inspect addons-463362 --format={{.State.Status}}
	I0224 11:49:18.084863  737494 cli_runner.go:164] Run: docker container inspect addons-463362 --format={{.State.Status}}
	I0224 11:49:18.084894  737494 cli_runner.go:164] Run: docker container inspect addons-463362 --format={{.State.Status}}
	I0224 11:49:18.084903  737494 cli_runner.go:164] Run: docker container inspect addons-463362 --format={{.State.Status}}
	I0224 11:49:18.084903  737494 cli_runner.go:164] Run: docker container inspect addons-463362 --format={{.State.Status}}
	I0224 11:49:18.084934  737494 cli_runner.go:164] Run: docker container inspect addons-463362 --format={{.State.Status}}
	I0224 11:49:18.085082  737494 addons.go:69] Setting metrics-server=true in profile "addons-463362"
	I0224 11:49:18.085156  737494 addons.go:238] Setting addon metrics-server=true in "addons-463362"
	I0224 11:49:18.085232  737494 host.go:66] Checking if "addons-463362" exists ...
	I0224 11:49:18.085521  737494 addons.go:69] Setting registry=true in profile "addons-463362"
	I0224 11:49:18.085544  737494 addons.go:238] Setting addon registry=true in "addons-463362"
	I0224 11:49:18.085575  737494 host.go:66] Checking if "addons-463362" exists ...
	I0224 11:49:18.085907  737494 cli_runner.go:164] Run: docker container inspect addons-463362 --format={{.State.Status}}
	I0224 11:49:18.086055  737494 cli_runner.go:164] Run: docker container inspect addons-463362 --format={{.State.Status}}
	I0224 11:49:18.086505  737494 addons.go:69] Setting volumesnapshots=true in profile "addons-463362"
	I0224 11:49:18.086571  737494 addons.go:238] Setting addon volumesnapshots=true in "addons-463362"
	I0224 11:49:18.086631  737494 host.go:66] Checking if "addons-463362" exists ...
	I0224 11:49:18.086803  737494 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-463362"
	I0224 11:49:18.086832  737494 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-463362"
	I0224 11:49:18.087152  737494 cli_runner.go:164] Run: docker container inspect addons-463362 --format={{.State.Status}}
	I0224 11:49:18.087280  737494 cli_runner.go:164] Run: docker container inspect addons-463362 --format={{.State.Status}}
	I0224 11:49:18.091107  737494 out.go:177] * Verifying Kubernetes components...
	I0224 11:49:18.084291  737494 addons.go:69] Setting cloud-spanner=true in profile "addons-463362"
	I0224 11:49:18.091196  737494 addons.go:238] Setting addon cloud-spanner=true in "addons-463362"
	I0224 11:49:18.091562  737494 addons.go:69] Setting storage-provisioner=true in profile "addons-463362"
	I0224 11:49:18.091627  737494 addons.go:238] Setting addon storage-provisioner=true in "addons-463362"
	I0224 11:49:18.091656  737494 host.go:66] Checking if "addons-463362" exists ...
	I0224 11:49:18.084279  737494 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-463362"
	I0224 11:49:18.091719  737494 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-463362"
	I0224 11:49:18.091743  737494 host.go:66] Checking if "addons-463362" exists ...
	I0224 11:49:18.084291  737494 addons.go:69] Setting ingress=true in profile "addons-463362"
	I0224 11:49:18.091806  737494 addons.go:238] Setting addon ingress=true in "addons-463362"
	I0224 11:49:18.091877  737494 host.go:66] Checking if "addons-463362" exists ...
	I0224 11:49:18.084301  737494 addons.go:69] Setting ingress-dns=true in profile "addons-463362"
	I0224 11:49:18.091968  737494 addons.go:238] Setting addon ingress-dns=true in "addons-463362"
	I0224 11:49:18.092005  737494 host.go:66] Checking if "addons-463362" exists ...
	I0224 11:49:18.092298  737494 cli_runner.go:164] Run: docker container inspect addons-463362 --format={{.State.Status}}
	I0224 11:49:18.092523  737494 cli_runner.go:164] Run: docker container inspect addons-463362 --format={{.State.Status}}
	I0224 11:49:18.092578  737494 cli_runner.go:164] Run: docker container inspect addons-463362 --format={{.State.Status}}
	I0224 11:49:18.091629  737494 addons.go:69] Setting volcano=true in profile "addons-463362"
	I0224 11:49:18.092940  737494 addons.go:238] Setting addon volcano=true in "addons-463362"
	I0224 11:49:18.093016  737494 host.go:66] Checking if "addons-463362" exists ...
	I0224 11:49:18.091662  737494 host.go:66] Checking if "addons-463362" exists ...
	I0224 11:49:18.097014  737494 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0224 11:49:18.122583  737494 cli_runner.go:164] Run: docker container inspect addons-463362 --format={{.State.Status}}
	I0224 11:49:18.122897  737494 cli_runner.go:164] Run: docker container inspect addons-463362 --format={{.State.Status}}
	I0224 11:49:18.124542  737494 cli_runner.go:164] Run: docker container inspect addons-463362 --format={{.State.Status}}
	I0224 11:49:18.132657  737494 host.go:66] Checking if "addons-463362" exists ...
	I0224 11:49:18.132938  737494 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0224 11:49:18.133123  737494 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I0224 11:49:18.133916  737494 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-463362"
	I0224 11:49:18.133971  737494 host.go:66] Checking if "addons-463362" exists ...
	I0224 11:49:18.134420  737494 cli_runner.go:164] Run: docker container inspect addons-463362 --format={{.State.Status}}
	I0224 11:49:18.134492  737494 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0224 11:49:18.134527  737494 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0224 11:49:18.134588  737494 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0224 11:49:18.134667  737494 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.37.0
	I0224 11:49:18.134601  737494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463362
	I0224 11:49:18.135395  737494 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0224 11:49:18.135418  737494 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0224 11:49:18.135469  737494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463362
	I0224 11:49:18.135656  737494 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0224 11:49:18.136050  737494 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0224 11:49:18.136057  737494 out.go:177]   - Using image docker.io/registry:2.8.3
	I0224 11:49:18.136076  737494 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I0224 11:49:18.136894  737494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463362
	I0224 11:49:18.138241  737494 addons.go:238] Setting addon default-storageclass=true in "addons-463362"
	I0224 11:49:18.138289  737494 host.go:66] Checking if "addons-463362" exists ...
	I0224 11:49:18.138348  737494 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0224 11:49:18.138372  737494 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0224 11:49:18.138416  737494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463362
	I0224 11:49:18.138434  737494 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0224 11:49:18.138729  737494 cli_runner.go:164] Run: docker container inspect addons-463362 --format={{.State.Status}}
	I0224 11:49:18.143525  737494 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0224 11:49:18.143704  737494 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0224 11:49:18.144679  737494 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0224 11:49:18.144700  737494 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0224 11:49:18.144795  737494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463362
	I0224 11:49:18.145795  737494 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0224 11:49:18.147140  737494 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0224 11:49:18.148269  737494 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0224 11:49:18.149456  737494 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0224 11:49:18.149507  737494 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0224 11:49:18.150610  737494 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0224 11:49:18.150630  737494 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0224 11:49:18.150637  737494 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0224 11:49:18.150682  737494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463362
	I0224 11:49:18.151849  737494 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0224 11:49:18.151873  737494 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0224 11:49:18.151948  737494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463362
	I0224 11:49:18.165385  737494 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I0224 11:49:18.167868  737494 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.11.0
	I0224 11:49:18.171396  737494 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.11.0
	I0224 11:49:18.174558  737494 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0224 11:49:18.174580  737494 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0224 11:49:18.174635  737494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463362
	I0224 11:49:18.175563  737494 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.11.0
	I0224 11:49:18.175737  737494 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0224 11:49:18.176952  737494 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0224 11:49:18.176978  737494 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0224 11:49:18.177033  737494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463362
	I0224 11:49:18.200350  737494 addons.go:435] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0224 11:49:18.200397  737494 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (480278 bytes)
	I0224 11:49:18.200477  737494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463362
	I0224 11:49:18.203847  737494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20451-729451/.minikube/machines/addons-463362/id_rsa Username:docker}
	I0224 11:49:18.204985  737494 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0224 11:49:18.205058  737494 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I0224 11:49:18.207517  737494 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0224 11:49:18.207542  737494 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0224 11:49:18.207615  737494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463362
	I0224 11:49:18.209119  737494 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0224 11:49:18.209134  737494 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0224 11:49:18.209274  737494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463362
	I0224 11:49:18.209374  737494 out.go:177]   - Using image docker.io/busybox:stable
	I0224 11:49:18.210375  737494 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0224 11:49:18.212610  737494 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0224 11:49:18.214611  737494 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0224 11:49:18.214637  737494 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0224 11:49:18.214699  737494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463362
	I0224 11:49:18.217035  737494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20451-729451/.minikube/machines/addons-463362/id_rsa Username:docker}
	I0224 11:49:18.217272  737494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20451-729451/.minikube/machines/addons-463362/id_rsa Username:docker}
	I0224 11:49:18.217538  737494 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.29
	I0224 11:49:18.220948  737494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20451-729451/.minikube/machines/addons-463362/id_rsa Username:docker}
	I0224 11:49:18.224044  737494 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0224 11:49:18.225367  737494 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0224 11:49:18.225383  737494 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0224 11:49:18.225389  737494 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0224 11:49:18.225401  737494 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0224 11:49:18.225448  737494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463362
	I0224 11:49:18.225450  737494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463362
	I0224 11:49:18.226788  737494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20451-729451/.minikube/machines/addons-463362/id_rsa Username:docker}
	I0224 11:49:18.227882  737494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20451-729451/.minikube/machines/addons-463362/id_rsa Username:docker}
	I0224 11:49:18.242939  737494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20451-729451/.minikube/machines/addons-463362/id_rsa Username:docker}
	I0224 11:49:18.248178  737494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20451-729451/.minikube/machines/addons-463362/id_rsa Username:docker}
	I0224 11:49:18.251532  737494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20451-729451/.minikube/machines/addons-463362/id_rsa Username:docker}
	I0224 11:49:18.252932  737494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20451-729451/.minikube/machines/addons-463362/id_rsa Username:docker}
	I0224 11:49:18.255671  737494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20451-729451/.minikube/machines/addons-463362/id_rsa Username:docker}
	I0224 11:49:18.258060  737494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20451-729451/.minikube/machines/addons-463362/id_rsa Username:docker}
	I0224 11:49:18.258115  737494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20451-729451/.minikube/machines/addons-463362/id_rsa Username:docker}
	I0224 11:49:18.266641  737494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20451-729451/.minikube/machines/addons-463362/id_rsa Username:docker}
	I0224 11:49:18.267131  737494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20451-729451/.minikube/machines/addons-463362/id_rsa Username:docker}
	W0224 11:49:18.268636  737494 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0224 11:49:18.268667  737494 retry.go:31] will retry after 166.82292ms: ssh: handshake failed: EOF
	W0224 11:49:18.269087  737494 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0224 11:49:18.269110  737494 retry.go:31] will retry after 348.166817ms: ssh: handshake failed: EOF
	W0224 11:49:18.269522  737494 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0224 11:49:18.269543  737494 retry.go:31] will retry after 283.203073ms: ssh: handshake failed: EOF
	W0224 11:49:18.270045  737494 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0224 11:49:18.270064  737494 retry.go:31] will retry after 155.221136ms: ssh: handshake failed: EOF
	W0224 11:49:18.458914  737494 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0224 11:49:18.458952  737494 retry.go:31] will retry after 495.682272ms: ssh: handshake failed: EOF
	I0224 11:49:18.471592  737494 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0224 11:49:18.471685  737494 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0224 11:49:18.487927  737494 node_ready.go:35] waiting up to 6m0s for node "addons-463362" to be "Ready" ...
	I0224 11:49:18.490515  737494 node_ready.go:49] node "addons-463362" has status "Ready":"True"
	I0224 11:49:18.490543  737494 node_ready.go:38] duration metric: took 2.582404ms for node "addons-463362" to be "Ready" ...
	I0224 11:49:18.490553  737494 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0224 11:49:18.493682  737494 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-9zm8b" in "kube-system" namespace to be "Ready" ...
	I0224 11:49:18.666974  737494 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0224 11:49:18.667737  737494 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0224 11:49:18.667799  737494 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0224 11:49:18.678443  737494 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0224 11:49:18.678471  737494 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0224 11:49:18.759337  737494 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0224 11:49:18.759427  737494 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0224 11:49:18.760017  737494 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0224 11:49:18.766060  737494 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0224 11:49:18.766083  737494 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0224 11:49:18.768101  737494 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0224 11:49:18.771571  737494 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0224 11:49:18.775301  737494 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0224 11:49:18.775327  737494 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14539 bytes)
	I0224 11:49:18.861675  737494 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0224 11:49:18.868892  737494 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0224 11:49:18.870694  737494 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0224 11:49:18.961984  737494 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0224 11:49:18.962082  737494 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0224 11:49:18.979662  737494 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0224 11:49:18.979764  737494 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0224 11:49:18.980711  737494 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0224 11:49:18.980794  737494 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0224 11:49:19.058177  737494 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0224 11:49:19.058270  737494 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0224 11:49:19.265008  737494 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0224 11:49:19.371953  737494 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0224 11:49:19.380778  737494 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0224 11:49:19.463449  737494 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0224 11:49:19.463485  737494 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0224 11:49:19.465344  737494 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0224 11:49:19.465371  737494 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0224 11:49:19.567677  737494 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0224 11:49:19.870810  737494 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.399084597s)
	I0224 11:49:19.870859  737494 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0224 11:49:19.876261  737494 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0224 11:49:19.876288  737494 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0224 11:49:20.076041  737494 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0224 11:49:20.076098  737494 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0224 11:49:20.273035  737494 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0224 11:49:20.273123  737494 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0224 11:49:20.375233  737494 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-463362" context rescaled to 1 replicas
	I0224 11:49:20.562887  737494 pod_ready.go:103] pod "coredns-668d6bf9bc-9zm8b" in "kube-system" namespace has status "Ready":"False"
	I0224 11:49:20.565983  737494 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0224 11:49:20.566062  737494 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0224 11:49:20.582401  737494 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0224 11:49:20.582451  737494 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0224 11:49:20.658923  737494 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0224 11:49:20.658960  737494 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0224 11:49:21.173952  737494 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0224 11:49:21.173997  737494 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0224 11:49:21.360537  737494 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0224 11:49:21.366231  737494 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0224 11:49:21.366259  737494 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0224 11:49:21.379120  737494 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0224 11:49:21.379200  737494 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0224 11:49:21.668622  737494 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0224 11:49:21.673017  737494 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0224 11:49:21.760661  737494 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0224 11:49:21.760753  737494 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0224 11:49:22.668118  737494 pod_ready.go:103] pod "coredns-668d6bf9bc-9zm8b" in "kube-system" namespace has status "Ready":"False"
	I0224 11:49:22.764385  737494 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0224 11:49:22.764480  737494 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0224 11:49:23.466475  737494 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0224 11:49:23.466598  737494 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0224 11:49:23.872677  737494 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0224 11:49:23.872770  737494 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0224 11:49:24.283818  737494 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0224 11:49:24.283854  737494 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0224 11:49:24.670336  737494 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0224 11:49:24.971848  737494 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0224 11:49:24.971955  737494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463362
	I0224 11:49:24.995680  737494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20451-729451/.minikube/machines/addons-463362/id_rsa Username:docker}
	I0224 11:49:25.070611  737494 pod_ready.go:103] pod "coredns-668d6bf9bc-9zm8b" in "kube-system" namespace has status "Ready":"False"
	I0224 11:49:25.685521  737494 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0224 11:49:25.981248  737494 addons.go:238] Setting addon gcp-auth=true in "addons-463362"
	I0224 11:49:25.981322  737494 host.go:66] Checking if "addons-463362" exists ...
	I0224 11:49:25.981873  737494 cli_runner.go:164] Run: docker container inspect addons-463362 --format={{.State.Status}}
	I0224 11:49:25.998393  737494 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0224 11:49:25.998436  737494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-463362
	I0224 11:49:26.015026  737494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/20451-729451/.minikube/machines/addons-463362/id_rsa Username:docker}
	I0224 11:49:27.566560  737494 pod_ready.go:103] pod "coredns-668d6bf9bc-9zm8b" in "kube-system" namespace has status "Ready":"False"
	I0224 11:49:27.777259  737494 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.110245079s)
	I0224 11:49:27.777304  737494 addons.go:479] Verifying addon ingress=true in "addons-463362"
	I0224 11:49:27.777843  737494 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.01775469s)
	I0224 11:49:27.777902  737494 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (9.009775637s)
	I0224 11:49:27.777954  737494 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (9.006284967s)
	I0224 11:49:27.777983  737494 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.916278324s)
	I0224 11:49:27.778023  737494 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.909104767s)
	I0224 11:49:27.779200  737494 out.go:177] * Verifying ingress addon...
	I0224 11:49:27.781756  737494 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0224 11:49:27.958391  737494 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0224 11:49:27.958504  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:49:28.284822  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:49:28.858492  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:49:29.361335  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:49:29.664448  737494 pod_ready.go:103] pod "coredns-668d6bf9bc-9zm8b" in "kube-system" namespace has status "Ready":"False"
	I0224 11:49:29.865232  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:49:30.362087  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:49:30.787196  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:49:30.866404  737494 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (11.995643539s)
	I0224 11:49:30.866654  737494 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (11.601600421s)
	I0224 11:49:30.866731  737494 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (11.494720681s)
	I0224 11:49:30.866816  737494 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (11.48601617s)
	I0224 11:49:30.867142  737494 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (11.299429035s)
	I0224 11:49:30.867196  737494 addons.go:479] Verifying addon registry=true in "addons-463362"
	I0224 11:49:30.867636  737494 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (9.507055013s)
	I0224 11:49:30.867815  737494 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (9.199093136s)
	I0224 11:49:30.868105  737494 addons.go:479] Verifying addon metrics-server=true in "addons-463362"
	I0224 11:49:30.867987  737494 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (9.194833082s)
	W0224 11:49:30.868250  737494 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0224 11:49:30.868291  737494 retry.go:31] will retry after 180.52081ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
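The `no matches for kind "VolumeSnapshotClass"` failure above is the usual CRD-establishment race: the `csi-hostpath-snapclass` object is applied in the same `kubectl apply` invocation as the CRDs that define its kind, before the API server has finished registering the new types. minikube works around it by retrying (and, at 11:49:31, re-applying with `--force`). A sketch of a two-phase apply that avoids the race entirely, assuming the same manifest paths shown in the log:

```shell
# Two-phase apply: install the snapshot CRDs first, wait for them to reach
# the Established condition, then apply the objects that depend on them.
apply_snapshot_addons() {
  local addons=/etc/kubernetes/addons
  # Phase 1: CRDs only
  kubectl apply \
    -f "$addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml" \
    -f "$addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml" \
    -f "$addons/snapshot.storage.k8s.io_volumesnapshots.yaml"
  # Block until the API server can actually serve the new kinds
  kubectl wait --for condition=established --timeout=60s \
    crd/volumesnapshotclasses.snapshot.storage.k8s.io \
    crd/volumesnapshotcontents.snapshot.storage.k8s.io \
    crd/volumesnapshots.snapshot.storage.k8s.io
  # Phase 2: resources whose kinds the CRDs introduce
  kubectl apply \
    -f "$addons/csi-hostpath-snapshotclass.yaml" \
    -f "$addons/rbac-volume-snapshot-controller.yaml" \
    -f "$addons/volume-snapshot-controller-deployment.yaml"
}
```

`kubectl wait --for condition=established` is the documented way to gate on CRD readiness; simply retrying the apply, as the log does, achieves the same end because the second pass finds the CRDs already registered.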
	I0224 11:49:30.869376  737494 out.go:177] * Verifying registry addon...
	I0224 11:49:30.870092  737494 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-463362 service yakd-dashboard -n yakd-dashboard
	
	I0224 11:49:30.871672  737494 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0224 11:49:30.962298  737494 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0224 11:49:30.962328  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 11:49:31.049668  737494 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0224 11:49:31.363541  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:49:31.460648  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 11:49:31.785068  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:49:31.885613  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 11:49:31.968162  737494 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.297707084s)
	I0224 11:49:31.968212  737494 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-463362"
	I0224 11:49:31.968342  737494 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (5.969915491s)
	I0224 11:49:31.969899  737494 out.go:177] * Verifying csi-hostpath-driver addon...
	I0224 11:49:31.969978  737494 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I0224 11:49:31.971401  737494 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0224 11:49:31.972206  737494 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0224 11:49:31.972636  737494 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0224 11:49:31.972658  737494 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0224 11:49:31.983176  737494 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0224 11:49:31.983202  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:49:32.059757  737494 pod_ready.go:103] pod "coredns-668d6bf9bc-9zm8b" in "kube-system" namespace has status "Ready":"False"
	I0224 11:49:32.159348  737494 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0224 11:49:32.159425  737494 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0224 11:49:32.182268  737494 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0224 11:49:32.182293  737494 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0224 11:49:32.262264  737494 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0224 11:49:32.285986  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:49:32.375172  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 11:49:32.477402  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:49:32.861549  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:49:32.875581  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 11:49:32.977613  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:49:33.286529  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:49:33.375507  737494 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.325785928s)
	I0224 11:49:33.387268  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 11:49:33.476505  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:49:33.778266  737494 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.515941203s)
	I0224 11:49:33.779200  737494 addons.go:479] Verifying addon gcp-auth=true in "addons-463362"
	I0224 11:49:33.780445  737494 out.go:177] * Verifying gcp-auth addon...
	I0224 11:49:33.782579  737494 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0224 11:49:33.784700  737494 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0224 11:49:33.785042  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:49:33.885626  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 11:49:33.986401  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:49:34.286023  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:49:34.376066  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 11:49:34.475602  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:49:34.499015  737494 pod_ready.go:103] pod "coredns-668d6bf9bc-9zm8b" in "kube-system" namespace has status "Ready":"False"
	I0224 11:49:34.784892  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:49:34.874749  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 11:49:34.976036  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:49:35.289150  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:49:35.374670  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 11:49:35.476050  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:49:35.785918  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:49:35.874474  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 11:49:35.975538  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:49:36.285272  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:49:36.375495  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 11:49:36.476014  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:49:36.785847  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:49:36.886307  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 11:49:36.976299  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:49:36.998594  737494 pod_ready.go:103] pod "coredns-668d6bf9bc-9zm8b" in "kube-system" namespace has status "Ready":"False"
	I0224 11:49:37.284863  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:49:37.374836  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 11:49:37.475914  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:49:37.785401  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:49:37.874556  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 11:49:37.975942  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:49:38.285948  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:49:38.375150  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 11:49:38.476263  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:49:38.785233  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:49:38.875221  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 11:49:38.976282  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:49:38.998994  737494 pod_ready.go:103] pod "coredns-668d6bf9bc-9zm8b" in "kube-system" namespace has status "Ready":"False"
	I0224 11:49:39.285946  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:49:39.374740  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 11:49:39.475896  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:49:39.785693  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:49:39.875385  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 11:49:39.975175  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:49:40.286098  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:49:40.374986  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 11:49:40.476174  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:49:40.785659  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:49:40.875578  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 11:49:40.975444  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:49:41.285692  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:49:41.375578  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 11:49:41.475750  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:49:41.499543  737494 pod_ready.go:103] pod "coredns-668d6bf9bc-9zm8b" in "kube-system" namespace has status "Ready":"False"
	I0224 11:49:41.786334  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:49:41.875274  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 11:49:41.975673  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:49:42.286272  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:49:42.386711  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 11:49:42.475702  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:49:42.785800  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:49:42.875902  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 11:49:42.976099  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:49:43.285090  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:49:43.386173  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 11:49:43.486911  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:49:43.500132  737494 pod_ready.go:103] pod "coredns-668d6bf9bc-9zm8b" in "kube-system" namespace has status "Ready":"False"
	I0224 11:49:43.785342  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:49:43.875232  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 11:49:43.976483  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:49:44.285892  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:49:44.375089  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 11:49:44.476318  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:49:44.785269  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:49:44.875029  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 11:49:44.976215  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:49:45.285710  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:49:45.375494  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 11:49:45.475533  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:49:45.785188  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:49:45.875073  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 11:49:45.976088  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:49:45.998224  737494 pod_ready.go:103] pod "coredns-668d6bf9bc-9zm8b" in "kube-system" namespace has status "Ready":"False"
	I0224 11:49:46.285397  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:49:46.375386  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 11:49:46.475834  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:49:46.785852  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:49:46.874239  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 11:49:46.976508  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:49:47.286637  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:49:47.378499  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 11:49:47.476497  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:49:47.785582  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:49:47.874223  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 11:49:47.976119  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:49:47.998533  737494 pod_ready.go:103] pod "coredns-668d6bf9bc-9zm8b" in "kube-system" namespace has status "Ready":"False"
	I0224 11:49:48.284913  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:49:48.375563  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 11:49:48.475048  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:49:48.785660  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:49:48.886138  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 11:49:48.976287  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:49:49.285298  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:49:49.375219  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 11:49:49.476452  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:49:49.785337  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:49:49.875353  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 11:49:49.975252  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:49:50.285516  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:49:50.386131  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 11:49:50.475584  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:49:50.499363  737494 pod_ready.go:103] pod "coredns-668d6bf9bc-9zm8b" in "kube-system" namespace has status "Ready":"False"
	I0224 11:49:50.785476  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:49:50.875096  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 11:49:50.976079  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:49:51.284988  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:49:51.375491  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 11:49:51.475484  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:49:51.785633  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:49:51.875219  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 11:49:51.976377  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:49:52.286092  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:49:52.385942  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0224 11:49:52.486935  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:49:52.785328  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:49:52.875070  737494 kapi.go:107] duration metric: took 22.003394115s to wait for kubernetes.io/minikube-addons=registry ...
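With the registry pods Ready after roughly 22s, the test's next step (the one that times out in the failure summary above) is the in-cluster reachability probe. A minimal way to reproduce that probe by hand, using the same image and Service DNS name as the test and the context name from this run:

```shell
# Re-run the registry reachability probe from the failed test: spawn a
# one-shot busybox pod and hit the registry Service's cluster DNS name.
check_registry() {
  kubectl --context addons-463362 run --rm registry-test \
    --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- \
    sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
}
```

A non-zero exit here, as in the test, points at in-cluster DNS or the registry Service rather than the registry pods themselves, since both `actual-registry` and `registry-proxy` pods were reported healthy.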
	I0224 11:49:52.976017  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:49:52.998488  737494 pod_ready.go:103] pod "coredns-668d6bf9bc-9zm8b" in "kube-system" namespace has status "Ready":"False"
	I0224 11:49:53.285336  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:49:53.474945  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:49:53.785301  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:49:53.975030  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:49:54.285013  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:49:54.476335  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:49:54.785754  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:49:54.975701  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:49:54.999381  737494 pod_ready.go:103] pod "coredns-668d6bf9bc-9zm8b" in "kube-system" namespace has status "Ready":"False"
	I0224 11:49:55.285444  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:49:55.474832  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:49:55.784549  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:49:55.975930  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:49:56.285231  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:49:56.475864  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:49:56.784769  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:49:56.975550  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:49:57.284948  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:49:57.475617  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:49:57.498437  737494 pod_ready.go:103] pod "coredns-668d6bf9bc-9zm8b" in "kube-system" namespace has status "Ready":"False"
	I0224 11:49:57.785330  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:49:57.975261  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:49:58.284866  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:49:58.475243  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:49:58.785417  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:49:58.975410  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:49:59.284914  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:49:59.475289  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:49:59.785047  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:49:59.975364  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:49:59.998247  737494 pod_ready.go:103] pod "coredns-668d6bf9bc-9zm8b" in "kube-system" namespace has status "Ready":"False"
	I0224 11:50:00.285450  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:50:00.475516  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:50:00.498956  737494 pod_ready.go:93] pod "coredns-668d6bf9bc-9zm8b" in "kube-system" namespace has status "Ready":"True"
	I0224 11:50:00.498984  737494 pod_ready.go:82] duration metric: took 42.00528181s for pod "coredns-668d6bf9bc-9zm8b" in "kube-system" namespace to be "Ready" ...
	I0224 11:50:00.498995  737494 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-xtr5v" in "kube-system" namespace to be "Ready" ...
	I0224 11:50:00.500676  737494 pod_ready.go:98] error getting pod "coredns-668d6bf9bc-xtr5v" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-xtr5v" not found
	I0224 11:50:00.500696  737494 pod_ready.go:82] duration metric: took 1.694811ms for pod "coredns-668d6bf9bc-xtr5v" in "kube-system" namespace to be "Ready" ...
	E0224 11:50:00.500706  737494 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-668d6bf9bc-xtr5v" in "kube-system" namespace (skipping!): pods "coredns-668d6bf9bc-xtr5v" not found
	I0224 11:50:00.500712  737494 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-463362" in "kube-system" namespace to be "Ready" ...
	I0224 11:50:00.504029  737494 pod_ready.go:93] pod "etcd-addons-463362" in "kube-system" namespace has status "Ready":"True"
	I0224 11:50:00.504052  737494 pod_ready.go:82] duration metric: took 3.332077ms for pod "etcd-addons-463362" in "kube-system" namespace to be "Ready" ...
	I0224 11:50:00.504063  737494 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-463362" in "kube-system" namespace to be "Ready" ...
	I0224 11:50:00.507251  737494 pod_ready.go:93] pod "kube-apiserver-addons-463362" in "kube-system" namespace has status "Ready":"True"
	I0224 11:50:00.507273  737494 pod_ready.go:82] duration metric: took 3.201782ms for pod "kube-apiserver-addons-463362" in "kube-system" namespace to be "Ready" ...
	I0224 11:50:00.507286  737494 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-463362" in "kube-system" namespace to be "Ready" ...
	I0224 11:50:00.510578  737494 pod_ready.go:93] pod "kube-controller-manager-addons-463362" in "kube-system" namespace has status "Ready":"True"
	I0224 11:50:00.510599  737494 pod_ready.go:82] duration metric: took 3.303356ms for pod "kube-controller-manager-addons-463362" in "kube-system" namespace to be "Ready" ...
	I0224 11:50:00.510611  737494 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-szncl" in "kube-system" namespace to be "Ready" ...
	I0224 11:50:00.698033  737494 pod_ready.go:93] pod "kube-proxy-szncl" in "kube-system" namespace has status "Ready":"True"
	I0224 11:50:00.698058  737494 pod_ready.go:82] duration metric: took 187.43924ms for pod "kube-proxy-szncl" in "kube-system" namespace to be "Ready" ...
	I0224 11:50:00.698067  737494 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-463362" in "kube-system" namespace to be "Ready" ...
	I0224 11:50:00.785236  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:50:00.975467  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:50:01.097766  737494 pod_ready.go:93] pod "kube-scheduler-addons-463362" in "kube-system" namespace has status "Ready":"True"
	I0224 11:50:01.097792  737494 pod_ready.go:82] duration metric: took 399.717076ms for pod "kube-scheduler-addons-463362" in "kube-system" namespace to be "Ready" ...
	I0224 11:50:01.097802  737494 pod_ready.go:39] duration metric: took 42.607233513s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0224 11:50:01.097839  737494 api_server.go:52] waiting for apiserver process to appear ...
	I0224 11:50:01.097904  737494 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 11:50:01.112186  737494 api_server.go:72] duration metric: took 43.028210274s to wait for apiserver process to appear ...
	I0224 11:50:01.112208  737494 api_server.go:88] waiting for apiserver healthz status ...
	I0224 11:50:01.112230  737494 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0224 11:50:01.117533  737494 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0224 11:50:01.118349  737494 api_server.go:141] control plane version: v1.32.2
	I0224 11:50:01.118371  737494 api_server.go:131] duration metric: took 6.157475ms to wait for apiserver health ...
	I0224 11:50:01.118379  737494 system_pods.go:43] waiting for kube-system pods to appear ...
	I0224 11:50:01.285224  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:50:01.298227  737494 system_pods.go:59] 18 kube-system pods found
	I0224 11:50:01.298262  737494 system_pods.go:61] "amd-gpu-device-plugin-g6rln" [c7655b51-9c2e-43b3-a59f-0c24440ec729] Running
	I0224 11:50:01.298267  737494 system_pods.go:61] "coredns-668d6bf9bc-9zm8b" [ea58a21c-6a3f-4fd1-a2a6-aa6ce8a2bf56] Running
	I0224 11:50:01.298274  737494 system_pods.go:61] "csi-hostpath-attacher-0" [3d27c2cb-b3e0-4876-91a0-830fed70498f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0224 11:50:01.298280  737494 system_pods.go:61] "csi-hostpath-resizer-0" [8d6640df-0bb0-4a50-8f93-aee3c32d2fd7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0224 11:50:01.298288  737494 system_pods.go:61] "csi-hostpathplugin-wwzw4" [eef7fbab-db21-456a-acef-480eaaa9043d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0224 11:50:01.298293  737494 system_pods.go:61] "etcd-addons-463362" [f16e4016-d09d-4aad-bf5e-2dc61577db60] Running
	I0224 11:50:01.298297  737494 system_pods.go:61] "kube-apiserver-addons-463362" [875585d7-7c2c-4ada-8b33-4f99b4bbeaa3] Running
	I0224 11:50:01.298302  737494 system_pods.go:61] "kube-controller-manager-addons-463362" [45ffcef4-ccd3-4695-85c2-023457f42f61] Running
	I0224 11:50:01.298310  737494 system_pods.go:61] "kube-ingress-dns-minikube" [fa6a4e8e-5314-4064-8078-c73e5a3295d1] Running
	I0224 11:50:01.298313  737494 system_pods.go:61] "kube-proxy-szncl" [f313eaba-4784-4d33-9082-913ffd82b174] Running
	I0224 11:50:01.298316  737494 system_pods.go:61] "kube-scheduler-addons-463362" [2df96522-f57a-4bc5-9195-3f1fc6ebdfc1] Running
	I0224 11:50:01.298322  737494 system_pods.go:61] "metrics-server-7fbb699795-9ms6n" [d7c51419-cc63-4be6-8bf3-5609655033aa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0224 11:50:01.298328  737494 system_pods.go:61] "nvidia-device-plugin-daemonset-k78zm" [1db4514c-2559-4b82-8c66-9fa073a836ff] Running
	I0224 11:50:01.298332  737494 system_pods.go:61] "registry-6c88467877-mkfm8" [997e9970-e3ba-46f8-a564-dca79745389d] Running
	I0224 11:50:01.298335  737494 system_pods.go:61] "registry-proxy-59cfj" [36568ecc-510f-46b5-8192-bc771e49bf12] Running
	I0224 11:50:01.298341  737494 system_pods.go:61] "snapshot-controller-68b874b76f-4c7m6" [3b1ebe36-acca-450d-9066-0dbf40227aab] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0224 11:50:01.298349  737494 system_pods.go:61] "snapshot-controller-68b874b76f-4vfmj" [7e2535a7-8620-4259-b68b-c69a9198759f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0224 11:50:01.298355  737494 system_pods.go:61] "storage-provisioner" [de4f4839-3aa4-4a71-a4b2-c9c670738b4e] Running
	I0224 11:50:01.298361  737494 system_pods.go:74] duration metric: took 179.977464ms to wait for pod list to return data ...
	I0224 11:50:01.298369  737494 default_sa.go:34] waiting for default service account to be created ...
	I0224 11:50:01.475813  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:50:01.497012  737494 default_sa.go:45] found service account: "default"
	I0224 11:50:01.497044  737494 default_sa.go:55] duration metric: took 198.66621ms for default service account to be created ...
	I0224 11:50:01.497054  737494 system_pods.go:116] waiting for k8s-apps to be running ...
	I0224 11:50:01.698707  737494 system_pods.go:86] 18 kube-system pods found
	I0224 11:50:01.698743  737494 system_pods.go:89] "amd-gpu-device-plugin-g6rln" [c7655b51-9c2e-43b3-a59f-0c24440ec729] Running
	I0224 11:50:01.698751  737494 system_pods.go:89] "coredns-668d6bf9bc-9zm8b" [ea58a21c-6a3f-4fd1-a2a6-aa6ce8a2bf56] Running
	I0224 11:50:01.698761  737494 system_pods.go:89] "csi-hostpath-attacher-0" [3d27c2cb-b3e0-4876-91a0-830fed70498f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0224 11:50:01.698769  737494 system_pods.go:89] "csi-hostpath-resizer-0" [8d6640df-0bb0-4a50-8f93-aee3c32d2fd7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0224 11:50:01.698778  737494 system_pods.go:89] "csi-hostpathplugin-wwzw4" [eef7fbab-db21-456a-acef-480eaaa9043d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0224 11:50:01.698784  737494 system_pods.go:89] "etcd-addons-463362" [f16e4016-d09d-4aad-bf5e-2dc61577db60] Running
	I0224 11:50:01.698790  737494 system_pods.go:89] "kube-apiserver-addons-463362" [875585d7-7c2c-4ada-8b33-4f99b4bbeaa3] Running
	I0224 11:50:01.698797  737494 system_pods.go:89] "kube-controller-manager-addons-463362" [45ffcef4-ccd3-4695-85c2-023457f42f61] Running
	I0224 11:50:01.698804  737494 system_pods.go:89] "kube-ingress-dns-minikube" [fa6a4e8e-5314-4064-8078-c73e5a3295d1] Running
	I0224 11:50:01.698810  737494 system_pods.go:89] "kube-proxy-szncl" [f313eaba-4784-4d33-9082-913ffd82b174] Running
	I0224 11:50:01.698829  737494 system_pods.go:89] "kube-scheduler-addons-463362" [2df96522-f57a-4bc5-9195-3f1fc6ebdfc1] Running
	I0224 11:50:01.698841  737494 system_pods.go:89] "metrics-server-7fbb699795-9ms6n" [d7c51419-cc63-4be6-8bf3-5609655033aa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0224 11:50:01.698847  737494 system_pods.go:89] "nvidia-device-plugin-daemonset-k78zm" [1db4514c-2559-4b82-8c66-9fa073a836ff] Running
	I0224 11:50:01.698854  737494 system_pods.go:89] "registry-6c88467877-mkfm8" [997e9970-e3ba-46f8-a564-dca79745389d] Running
	I0224 11:50:01.698862  737494 system_pods.go:89] "registry-proxy-59cfj" [36568ecc-510f-46b5-8192-bc771e49bf12] Running
	I0224 11:50:01.698872  737494 system_pods.go:89] "snapshot-controller-68b874b76f-4c7m6" [3b1ebe36-acca-450d-9066-0dbf40227aab] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0224 11:50:01.698884  737494 system_pods.go:89] "snapshot-controller-68b874b76f-4vfmj" [7e2535a7-8620-4259-b68b-c69a9198759f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0224 11:50:01.698890  737494 system_pods.go:89] "storage-provisioner" [de4f4839-3aa4-4a71-a4b2-c9c670738b4e] Running
	I0224 11:50:01.698904  737494 system_pods.go:126] duration metric: took 201.842686ms to wait for k8s-apps to be running ...
	I0224 11:50:01.698919  737494 system_svc.go:44] waiting for kubelet service to be running ....
	I0224 11:50:01.698981  737494 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0224 11:50:01.712307  737494 system_svc.go:56] duration metric: took 13.378443ms WaitForService to wait for kubelet
	I0224 11:50:01.712338  737494 kubeadm.go:582] duration metric: took 43.628366738s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0224 11:50:01.712370  737494 node_conditions.go:102] verifying NodePressure condition ...
	I0224 11:50:01.785143  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:50:01.897374  737494 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0224 11:50:01.897401  737494 node_conditions.go:123] node cpu capacity is 8
	I0224 11:50:01.897414  737494 node_conditions.go:105] duration metric: took 185.038696ms to run NodePressure ...
	I0224 11:50:01.897428  737494 start.go:241] waiting for startup goroutines ...
	I0224 11:50:01.975193  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:50:02.285436  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:50:02.475345  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:50:02.785421  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:50:02.975187  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:50:03.284755  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:50:03.475265  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:50:03.784738  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:50:03.975336  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:50:04.285112  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:50:04.475297  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:50:04.785308  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:50:04.975802  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:50:05.284562  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:50:05.476056  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:50:05.784774  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:50:05.975188  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:50:06.284951  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:50:06.475887  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:50:06.784911  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:50:06.974938  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:50:07.284725  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:50:07.475273  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:50:07.784956  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:50:07.975781  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:50:08.284575  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:50:08.474962  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:50:08.784909  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:50:08.974996  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:50:09.284607  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:50:09.475023  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:50:09.784833  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:50:09.975256  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:50:10.284554  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:50:10.475093  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:50:10.784890  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:50:10.974941  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:50:11.284613  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:50:11.475016  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:50:11.784420  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:50:11.976062  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:50:12.284999  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:50:12.475556  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:50:12.785212  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:50:12.975917  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:50:13.284433  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:50:13.474940  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:50:13.784735  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:50:13.975123  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:50:14.284800  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:50:14.475591  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:50:14.785400  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:50:14.975278  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:50:15.284679  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:50:15.475505  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:50:15.785442  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:50:15.975128  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:50:16.284983  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:50:16.475487  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:50:16.785611  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:50:16.975431  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:50:17.285156  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:50:17.475714  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:50:17.785360  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:50:17.974735  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:50:18.284549  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:50:18.474954  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:50:18.784994  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:50:18.975220  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:50:19.284936  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:50:19.475516  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:50:19.784988  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:50:19.975708  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:50:20.284553  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:50:20.474883  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:50:20.784823  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:50:20.976001  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:50:21.284730  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:50:21.475165  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:50:21.785054  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:50:21.975719  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:50:22.284708  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:50:22.475867  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:50:22.784784  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:50:22.975901  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:50:23.284991  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:50:23.475745  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:50:23.784656  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:50:23.975646  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:50:24.285648  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:50:24.475289  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:50:24.785411  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:50:24.975454  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:50:25.284871  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:50:25.475071  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:50:25.784910  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:50:25.975560  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:50:26.285361  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:50:26.475161  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:50:26.785415  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:50:26.975394  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:50:27.285435  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:50:27.475088  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	... [the same two kapi.go:96 polling lines repeat, alternating between pod selectors "app.kubernetes.io/name=ingress-nginx" and "kubernetes.io/minikube-addons=csi-hostpath-driver", each polled roughly every half second; both remain in state Pending: [<nil>] from 11:50:27 through 11:51:31] ...
	I0224 11:51:31.785416  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:51:31.975187  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:51:32.285382  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:51:32.475275  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:51:32.785035  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:51:32.975794  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:51:33.284254  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:51:33.475937  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:51:33.784667  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:51:33.975067  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:51:34.285058  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:51:34.475191  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:51:34.784880  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:51:34.975889  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:51:35.284385  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:51:35.476008  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:51:35.784570  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:51:35.975566  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:51:36.284978  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:51:36.475540  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:51:36.785580  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:51:36.975630  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:51:37.284506  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:51:37.475722  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:51:37.785344  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:51:37.975167  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:51:38.285052  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:51:38.475603  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:51:38.785387  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:51:38.975429  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:51:39.285270  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:51:39.475713  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:51:39.784237  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:51:39.975550  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:51:40.285550  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:51:40.475297  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:51:40.785089  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:51:40.975166  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:51:41.284867  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:51:41.475148  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:51:41.785193  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:51:41.976245  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:51:42.285326  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:51:42.475885  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:51:42.784776  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:51:42.976164  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:51:43.285465  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:51:43.475523  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:51:43.786165  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:51:43.976460  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:51:44.285616  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:51:44.475969  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:51:44.785428  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:51:44.975359  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:51:45.285515  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:51:45.475608  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:51:45.784867  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:51:45.975214  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:51:46.285431  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:51:46.475976  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:51:46.785015  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:51:46.975266  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:51:47.285616  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:51:47.475862  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:51:47.785213  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:51:47.976490  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:51:48.285872  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:51:48.475732  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:51:48.786157  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:51:48.976632  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:51:49.286270  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:51:49.476248  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:51:49.819659  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:51:49.975120  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:51:50.285096  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:51:50.475751  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:51:50.785594  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:51:50.975879  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:51:51.298558  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:51:51.476488  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:51:51.787325  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:51:51.975858  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:51:52.285340  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:51:52.476296  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:51:52.785465  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:51:52.976565  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:51:53.285677  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:51:53.475589  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:51:53.786335  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:51:53.976428  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:51:54.285881  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:51:54.476085  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:51:54.795436  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:51:55.035098  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:51:55.284880  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:51:55.475245  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:51:55.785346  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:51:55.976394  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:51:56.284911  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:51:56.516168  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:51:56.785006  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:51:57.075209  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:51:57.316660  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:51:57.517315  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:51:57.785645  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:51:57.975512  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:51:58.303170  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:51:58.503660  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:51:58.785675  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:51:58.976054  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:51:59.285379  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:51:59.475912  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:51:59.785652  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:51:59.975847  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:52:00.284912  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:00.476541  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:52:00.864062  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:00.976016  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:52:01.284436  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:01.475999  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:52:01.785135  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:01.990227  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:52:02.285308  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:02.476069  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:52:02.784881  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:02.976273  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:52:03.285490  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:03.475667  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:52:03.785674  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:03.975861  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:52:04.289412  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:04.505747  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:52:04.784457  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:04.976081  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0224 11:52:05.285273  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:05.475323  737494 kapi.go:107] duration metric: took 2m33.503114499s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0224 11:52:05.785203  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:06.285508  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:06.785108  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:07.285018  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:07.785790  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:08.285080  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:08.784721  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:09.285475  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:09.785494  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:10.284741  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:10.784673  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:11.285611  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:11.785357  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:12.285405  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:12.784653  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:13.285277  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:13.784553  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:14.285092  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:14.784982  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:15.284944  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:15.785191  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:16.284593  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:16.785479  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:17.285303  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:17.784260  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:18.284855  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:18.784329  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:19.285405  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:19.784666  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:20.285134  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:20.784850  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:21.285457  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:21.784906  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:22.285668  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:22.785062  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:23.285684  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:23.784665  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:24.284781  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:24.785512  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:25.284733  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:25.785378  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:26.284770  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:26.785417  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:27.285240  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:27.784812  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:28.285248  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:28.784552  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:29.285394  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:29.784607  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:30.285053  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:30.785339  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:31.285159  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:31.785826  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:32.285551  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:32.784826  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:33.284615  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:33.785026  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:34.284695  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:34.785557  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:35.285645  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:35.785028  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:36.285562  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:36.785106  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:37.285776  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:37.785316  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:38.285281  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:38.784979  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:39.287214  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:39.784767  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:40.285026  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:40.784669  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:41.284799  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:41.785486  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:42.285118  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:42.784880  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:43.285138  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:43.785052  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:44.285225  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:44.784563  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:45.285047  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:45.784319  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:46.285332  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:46.785108  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:47.284710  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:47.784841  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:48.284994  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:48.784853  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:49.285845  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:49.785283  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:50.284782  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:50.784973  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:51.285225  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:51.784695  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:52.285601  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:52.784973  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:53.285607  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:53.784808  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:54.284633  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:54.785221  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:55.284765  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:55.785372  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:56.284881  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:56.785443  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:57.284822  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:57.785349  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:58.284698  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:58.784570  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:59.284825  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:52:59.785477  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:00.285727  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:00.784688  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:01.284880  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:01.784975  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:02.285710  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:02.784788  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:03.285684  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:03.785457  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:04.284801  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:04.784985  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:05.284791  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:05.785818  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:06.285358  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:06.785012  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:07.285387  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:07.784980  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:08.285191  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:08.785401  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:09.284635  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:09.785000  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:10.285679  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:10.785194  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:11.284909  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:11.785566  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:12.285485  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:12.785054  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:13.284960  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:13.784884  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:14.284912  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:14.784723  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:15.284892  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:15.785198  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:16.285651  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:16.785274  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:17.284735  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:17.785274  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:18.285082  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:18.785691  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:19.284844  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:19.785260  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:20.284865  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:20.785194  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:21.285209  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:21.784997  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:22.285699  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:22.785203  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:23.284835  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:23.785693  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:24.285102  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:24.785078  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:25.284465  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:25.785326  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:26.284867  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:26.785281  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:27.284869  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:27.785788  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:28.285244  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:28.784304  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:29.284707  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:29.784869  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:30.284810  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:30.785464  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:31.284807  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:31.785856  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:32.285452  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:32.784672  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:33.284540  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:33.785329  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:34.284633  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:34.785004  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:35.285326  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:35.785150  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:36.285615  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:36.786884  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:37.285476  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:37.859527  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:38.286412  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:38.785707  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:39.285578  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:39.785761  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:40.285991  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:40.785487  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:41.285026  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:41.785316  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:42.285695  737494 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0224 11:53:42.784784  737494 kapi.go:107] duration metric: took 4m15.003024998s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0224 11:55:01.786187  737494 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0224 11:55:01.786210  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 11:55:02.286022  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 11:55:02.786591  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 11:55:03.285673  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 11:55:03.785703  737494 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0224 11:55:04.285726  737494 kapi.go:107] duration metric: took 5m30.503145643s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0224 11:55:04.287576  737494 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-463362 cluster.
	I0224 11:55:04.288879  737494 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0224 11:55:04.290279  737494 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0224 11:55:04.291671  737494 out.go:177] * Enabled addons: storage-provisioner, nvidia-device-plugin, amd-gpu-device-plugin, ingress-dns, default-storageclass, volcano, inspektor-gadget, cloud-spanner, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0224 11:55:04.292820  737494 addons.go:514] duration metric: took 5m46.208703779s for enable addons: enabled=[storage-provisioner nvidia-device-plugin amd-gpu-device-plugin ingress-dns default-storageclass volcano inspektor-gadget cloud-spanner metrics-server yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0224 11:55:04.292865  737494 start.go:246] waiting for cluster config update ...
	I0224 11:55:04.292885  737494 start.go:255] writing updated cluster config ...
	I0224 11:55:04.293151  737494 ssh_runner.go:195] Run: rm -f paused
	I0224 11:55:04.345890  737494 start.go:600] kubectl: 1.32.2, cluster: 1.32.2 (minor skew: 0)
	I0224 11:55:04.347393  737494 out.go:177] * Done! kubectl is now configured to use "addons-463362" cluster and "default" namespace by default
	
	
	==> Docker <==
	Feb 24 11:55:55 addons-463362 dockerd[1328]: time="2025-02-24T11:55:55.113479326Z" level=info msg="ignoring event" container=0e1908699d1eeffbdc3bce0ea73e4c59a3f40e4dd157738f817465dec66dbb18 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 24 11:56:01 addons-463362 cri-dockerd[1592]: time="2025-02-24T11:56:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e063860d3f3e9f586eab6533c1cd2496523db561fc511785e49ed54720ac152c/resolv.conf as [nameserver 10.96.0.10 search headlamp.svc.cluster.local svc.cluster.local cluster.local us-east4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Feb 24 11:56:01 addons-463362 dockerd[1328]: time="2025-02-24T11:56:01.257895619Z" level=warning msg="reference for unknown type: " digest="sha256:fcf2f93fc3dfa8c9beb6467502aa9bb6cc9fac4379e0683246475a57bb05daae" remote="ghcr.io/headlamp-k8s/headlamp@sha256:fcf2f93fc3dfa8c9beb6467502aa9bb6cc9fac4379e0683246475a57bb05daae"
	Feb 24 11:56:02 addons-463362 dockerd[1328]: time="2025-02-24T11:56:02.073128600Z" level=info msg="Container failed to exit within 30s of signal 3 - using the force" container=06080e3df616fc93c5dca881152bdc070ed305daf3e116a3fca8eba47c9a0058
	Feb 24 11:56:02 addons-463362 dockerd[1328]: time="2025-02-24T11:56:02.091994711Z" level=info msg="ignoring event" container=06080e3df616fc93c5dca881152bdc070ed305daf3e116a3fca8eba47c9a0058 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 24 11:56:02 addons-463362 dockerd[1328]: time="2025-02-24T11:56:02.220430457Z" level=info msg="ignoring event" container=5a1ddefbd148024182947eb7c5b7c16a84ef1b9fd9f5aa1c83b1828bc3f3405c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 24 11:56:06 addons-463362 dockerd[1328]: time="2025-02-24T11:56:06.576773129Z" level=info msg="ignoring event" container=41b0928007adf4de909edd6f3fd7c9aa96c8e8640f32725952c9c75c20af26b6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 24 11:56:06 addons-463362 dockerd[1328]: time="2025-02-24T11:56:06.586348443Z" level=info msg="ignoring event" container=db39f36c3af9e8ec986c1345b0e7fd61cf349a25e6e6f4b8c33c3e5de2771c9a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 24 11:56:06 addons-463362 dockerd[1328]: time="2025-02-24T11:56:06.776363935Z" level=info msg="ignoring event" container=e9eea7425bcd97d1843ab0e6b69bd050fd8ee06034bd1a257aac20bfc8d2c6da module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 24 11:56:06 addons-463362 dockerd[1328]: time="2025-02-24T11:56:06.788514201Z" level=info msg="ignoring event" container=eab6e55c9fa2cd22cefc8bd91ca46dc59a0af5195e4af66c91c34d83d0dc0c6f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 24 11:56:10 addons-463362 cri-dockerd[1592]: time="2025-02-24T11:56:10Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/23216f5e1971438735cfd04d0d42d461c1dd1f53282cfb8153197d24107fb534/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Feb 24 11:56:11 addons-463362 cri-dockerd[1592]: time="2025-02-24T11:56:11Z" level=info msg="Pulling image ghcr.io/headlamp-k8s/headlamp:v0.28.0@sha256:fcf2f93fc3dfa8c9beb6467502aa9bb6cc9fac4379e0683246475a57bb05daae: 3de4c16fff26: Download complete "
	Feb 24 11:56:12 addons-463362 dockerd[1328]: time="2025-02-24T11:56:12.935309610Z" level=info msg="ignoring event" container=59bb06f2e0cbbfbb139d0ba2deec0d6ff4f38c84be7f02c4f209f9f221887824 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 24 11:56:13 addons-463362 dockerd[1328]: time="2025-02-24T11:56:13.087785182Z" level=info msg="ignoring event" container=dcb43f312baa47ff08c205552d5444cdd9ab108565115c786c71281ea9fdc337 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 24 11:56:13 addons-463362 dockerd[1328]: time="2025-02-24T11:56:13.104245153Z" level=info msg="ignoring event" container=d422d0048f9e146d8785c27186faf57cabd4f7ea0722936a4a3c978fe0873fb8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 24 11:56:13 addons-463362 dockerd[1328]: time="2025-02-24T11:56:13.231044419Z" level=info msg="ignoring event" container=8093535dc80f2f2762f28fea7ed5e707c0ed69c3bf943ee3e66bda7e17c57c31 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 24 11:56:20 addons-463362 cri-dockerd[1592]: time="2025-02-24T11:56:20Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5c15bb8c810f621362a780af6af587cece377692d5b91715791fdcd44e604ea6/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Feb 24 11:56:21 addons-463362 cri-dockerd[1592]: time="2025-02-24T11:56:21Z" level=info msg="Pulling image ghcr.io/headlamp-k8s/headlamp:v0.28.0@sha256:fcf2f93fc3dfa8c9beb6467502aa9bb6cc9fac4379e0683246475a57bb05daae: 3de4c16fff26: Pull complete "
	Feb 24 11:56:23 addons-463362 dockerd[1328]: time="2025-02-24T11:56:23.753625667Z" level=info msg="ignoring event" container=9ebcf363974d976373c026ffc53ab723d3713f8cd686f900ca5b5e005cabe6c9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 24 11:56:23 addons-463362 dockerd[1328]: time="2025-02-24T11:56:23.871667225Z" level=info msg="ignoring event" container=027755f5b8dfa1fd2deaa9ed5d27c252c42d0235b98c437df552a49ffb6811fa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 24 11:56:29 addons-463362 cri-dockerd[1592]: time="2025-02-24T11:56:29Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/87c79c034e5bac8b700d8930958948ded3734e10acdac408edb46c050cb9b47b/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local us-east4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Feb 24 11:56:31 addons-463362 cri-dockerd[1592]: time="2025-02-24T11:56:31Z" level=info msg="Pulling image ghcr.io/headlamp-k8s/headlamp:v0.28.0@sha256:fcf2f93fc3dfa8c9beb6467502aa9bb6cc9fac4379e0683246475a57bb05daae: 3de4c16fff26: Pull complete "
	Feb 24 11:56:41 addons-463362 cri-dockerd[1592]: time="2025-02-24T11:56:41Z" level=info msg="Pulling image ghcr.io/headlamp-k8s/headlamp:v0.28.0@sha256:fcf2f93fc3dfa8c9beb6467502aa9bb6cc9fac4379e0683246475a57bb05daae: 3de4c16fff26: Pull complete "
	Feb 24 11:56:51 addons-463362 cri-dockerd[1592]: time="2025-02-24T11:56:51Z" level=info msg="Pulling image ghcr.io/headlamp-k8s/headlamp:v0.28.0@sha256:fcf2f93fc3dfa8c9beb6467502aa9bb6cc9fac4379e0683246475a57bb05daae: 3de4c16fff26: Pull complete "
	Feb 24 11:57:01 addons-463362 cri-dockerd[1592]: time="2025-02-24T11:57:01Z" level=info msg="Pulling image ghcr.io/headlamp-k8s/headlamp:v0.28.0@sha256:fcf2f93fc3dfa8c9beb6467502aa9bb6cc9fac4379e0683246475a57bb05daae: 3de4c16fff26: Pull complete "
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	c013a32af1ed7       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          About a minute ago   Running             busybox                                  0                   7eb17765748a8       busybox
	7fc0fdfc33039       registry.k8s.io/ingress-nginx/controller@sha256:d56f135b6462cfc476447cfe564b83a45e8bb7da2774963b00d12161112270b7                             3 minutes ago        Running             controller                               0                   a19c83ca8172f       ingress-nginx-controller-56d7c84fd4-9b54q
	a8933f5cd2dfb       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          5 minutes ago        Running             csi-snapshotter                          0                   e07b7fc60b99b       csi-hostpathplugin-wwzw4
	29a9b1c886529       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          5 minutes ago        Running             csi-provisioner                          0                   e07b7fc60b99b       csi-hostpathplugin-wwzw4
	e3083791c4530       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            5 minutes ago        Running             liveness-probe                           0                   e07b7fc60b99b       csi-hostpathplugin-wwzw4
	677a5bb3c7e93       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           5 minutes ago        Running             hostpath                                 0                   e07b7fc60b99b       csi-hostpathplugin-wwzw4
	948042ffaafa6       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                5 minutes ago        Running             node-driver-registrar                    0                   e07b7fc60b99b       csi-hostpathplugin-wwzw4
	a98b7968eca9d       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              5 minutes ago        Running             csi-resizer                              0                   6246b153641eb       csi-hostpath-resizer-0
	43d5c1780b51b       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   5 minutes ago        Running             csi-external-health-monitor-controller   0                   e07b7fc60b99b       csi-hostpathplugin-wwzw4
	15d7b3fc3686e       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             5 minutes ago        Running             csi-attacher                             0                   25aa64746ba9c       csi-hostpath-attacher-0
	941884959aed3       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      5 minutes ago        Running             volume-snapshot-controller               0                   8c7a6bc97519e       snapshot-controller-68b874b76f-4vfmj
	f9a2820e10b37       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      5 minutes ago        Running             volume-snapshot-controller               0                   da13a7d362e9d       snapshot-controller-68b874b76f-4c7m6
	529d56946c111       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f                   5 minutes ago        Exited              patch                                    0                   491679e1f9bba       ingress-nginx-admission-patch-9zqk2
	0de9fe64e4d91       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f                   5 minutes ago        Exited              create                                   0                   9993b75fde114       ingress-nginx-admission-create-bccdx
	2865637e21261       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       7 minutes ago        Running             local-path-provisioner                   0                   e31b973680d0e       local-path-provisioner-76f89f99b5-cbr69
	95b30f9540715       gcr.io/k8s-minikube/kube-registry-proxy@sha256:60ab3508367ad093b4b891231572577371a29f838d61e64d7f7d093d961c862c                              7 minutes ago        Running             registry-proxy                           0                   8113be69de224       registry-proxy-59cfj
	e2c4b0cd17363       gcr.io/cloud-spanner-emulator/emulator@sha256:b6173c36a0f470e79bb8bd7f3d26b1e809d10cbd2a7592caa4dea323c55ad0b1                               7 minutes ago        Running             cloud-spanner-emulator                   0                   1fb4da022c607       cloud-spanner-emulator-754dc876cd-79v72
	d8b48594c8c5e       registry@sha256:319881be2ee9e345d5837d15842a04268de6a139e23be42654fc7664fc6eaf52                                                             7 minutes ago        Running             registry                                 0                   ede74767d2fca       registry-6c88467877-mkfm8
	f1554ea36b264       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c                             7 minutes ago        Running             minikube-ingress-dns                     0                   54d1ba578eee5       kube-ingress-dns-minikube
	6a352428a0bb8       6e38f40d628db                                                                                                                                7 minutes ago        Running             storage-provisioner                      0                   f770cfd9dcfa2       storage-provisioner
	4f4c7f11bd754       c69fa2e9cbf5f                                                                                                                                7 minutes ago        Running             coredns                                  0                   1928380c1f3aa       coredns-668d6bf9bc-9zm8b
	496386dab254a       f1332858868e1                                                                                                                                7 minutes ago        Running             kube-proxy                               0                   1a6844315aae8       kube-proxy-szncl
	62c4c2001abef       85b7a174738ba                                                                                                                                8 minutes ago        Running             kube-apiserver                           0                   5904e27bbf4d0       kube-apiserver-addons-463362
	4521e964850bf       d8e673e7c9983                                                                                                                                8 minutes ago        Running             kube-scheduler                           0                   a414f539eabcb       kube-scheduler-addons-463362
	17eae17990b1d       b6a454c5a800d                                                                                                                                8 minutes ago        Running             kube-controller-manager                  0                   6c3d902ab02b5       kube-controller-manager-addons-463362
	4d822e24e5498       a9e7e6b294baf                                                                                                                                8 minutes ago        Running             etcd                                     0                   ac4cd1ad2cd28       etcd-addons-463362
	
	
	==> controller_ingress [7fc0fdfc3303] <==
	W0224 11:53:42.560266       6 client_config.go:659] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
	I0224 11:53:42.560447       6 main.go:205] "Creating API client" host="https://10.96.0.1:443"
	I0224 11:53:42.566710       6 main.go:248] "Running in Kubernetes cluster" major="1" minor="32" git="v1.32.2" state="clean" commit="67a30c0adcf52bd3f56ff0893ce19966be12991f" platform="linux/amd64"
	I0224 11:53:42.761004       6 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I0224 11:53:42.781776       6 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I0224 11:53:42.789731       6 nginx.go:271] "Starting NGINX Ingress controller"
	I0224 11:53:42.795198       6 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"547cb3a4-efcb-42f7-9429-fa76899f849c", APIVersion:"v1", ResourceVersion:"622", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0224 11:53:42.798786       6 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"6b753bef-ac0e-40c9-9879-45bcd086f425", APIVersion:"v1", ResourceVersion:"623", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0224 11:53:42.798824       6 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"ae329b94-29b4-4910-8996-bcda3afd90a4", APIVersion:"v1", ResourceVersion:"624", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0224 11:53:43.991809       6 nginx.go:317] "Starting NGINX process"
	I0224 11:53:43.991912       6 leaderelection.go:254] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0224 11:53:43.992187       6 nginx.go:337] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0224 11:53:43.992439       6 controller.go:193] "Configuration changes detected, backend reload required"
	I0224 11:53:44.000582       6 leaderelection.go:268] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0224 11:53:44.000685       6 status.go:85] "New leader elected" identity="ingress-nginx-controller-56d7c84fd4-9b54q"
	I0224 11:53:44.004356       6 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-56d7c84fd4-9b54q" node="addons-463362"
	I0224 11:53:44.030543       6 controller.go:213] "Backend successfully reloaded"
	I0224 11:53:44.030627       6 controller.go:224] "Initial sync, sleeping for 1 second"
	I0224 11:53:44.030693       6 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-56d7c84fd4-9b54q", UID:"abee9660-e5ec-4f2b-b0f0-14ea1ca6a0a4", APIVersion:"v1", ResourceVersion:"1476", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	  Build:         0106de65cfccb74405a6dfa7d9daffc6f0a6ef1a
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.25.5
	
	-------------------------------------------------------------------------------
	
	
	
	==> coredns [4f4c7f11bd75] <==
	[INFO] 10.244.0.7:55496 - 5051 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000150108s
	[INFO] 10.244.0.7:45823 - 24287 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000114887s
	[INFO] 10.244.0.7:45823 - 23830 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000207241s
	[INFO] 10.244.0.7:56730 - 63169 "A IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,rd,ra 91 0.004238348s
	[INFO] 10.244.0.7:56730 - 63526 "AAAA IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,rd,ra 91 0.004528248s
	[INFO] 10.244.0.7:34007 - 34672 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004251683s
	[INFO] 10.244.0.7:34007 - 35030 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004662796s
	[INFO] 10.244.0.7:37038 - 32353 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.003973726s
	[INFO] 10.244.0.7:37038 - 32061 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.005508124s
	[INFO] 10.244.0.7:51430 - 14101 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000156702s
	[INFO] 10.244.0.7:51430 - 13825 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000189237s
	[INFO] 10.244.0.27:43707 - 5293 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000345924s
	[INFO] 10.244.0.27:34025 - 4638 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000474409s
	[INFO] 10.244.0.27:58113 - 40024 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000141669s
	[INFO] 10.244.0.27:50566 - 22385 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000186053s
	[INFO] 10.244.0.27:34897 - 30423 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000113834s
	[INFO] 10.244.0.27:47519 - 17809 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000150737s
	[INFO] 10.244.0.27:47763 - 29509 "AAAA IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 75 0.006981122s
	[INFO] 10.244.0.27:52116 - 55929 "A IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 75 0.011412246s
	[INFO] 10.244.0.27:43687 - 20843 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.008303795s
	[INFO] 10.244.0.27:41435 - 16384 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.016489948s
	[INFO] 10.244.0.27:59784 - 17704 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004341553s
	[INFO] 10.244.0.27:47583 - 48649 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004591817s
	[INFO] 10.244.0.27:60797 - 15701 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.001901946s
	[INFO] 10.244.0.27:42450 - 39751 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002089676s
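The NXDOMAIN/NOERROR pattern in the coredns log above is resolv.conf search-path expansion: pods run with `ndots:5`, so a relative name with fewer than five dots (e.g. `registry.kube-system.svc.cluster.local`, which has four) is first tried with each search suffix appended, and only then as-is — which is why the bare name is the only query answered NOERROR. A minimal sketch of that lookup order (the search list here is just the suffixes visible in the log, not a claim about this pod's actual `/etc/resolv.conf`):

```python
def candidate_fqdns(name: str, search: list[str], ndots: int = 5) -> list[str]:
    """Order of names a glibc/musl-style resolver attempts for `name`."""
    absolute = name.endswith(".")  # trailing dot disables search expansion
    if not absolute and name.count(".") < ndots:
        # Fewer dots than ndots: try every search suffix first, bare name last.
        return [f"{name}.{s}" for s in search] + [name]
    # Otherwise the name is tried as-is before any suffixing.
    return [name] + [f"{name}.{s}" for s in search]

# Suffixes in the order they appear in the coredns log above.
search_path = [
    "svc.cluster.local",
    "cluster.local",
    "us-east4-a.c.k8s-minikube.internal",
    "c.k8s-minikube.internal",
    "google.internal",
]

for fqdn in candidate_fqdns("registry.kube-system.svc.cluster.local", search_path):
    print(fqdn)
```

The first printed candidate, `registry.kube-system.svc.cluster.local.svc.cluster.local`, matches the first NXDOMAIN query in the log, and the final bare name matches the NOERROR response — so the resolution path itself was healthy; the test's timeout lay elsewhere.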
	
	
	==> describe nodes <==
	Name:               addons-463362
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-463362
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b76650f53499dbb51707efa4a87e94b72d747650
	                    minikube.k8s.io/name=addons-463362
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_02_24T11_49_13_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-463362
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-463362"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Feb 2025 11:49:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-463362
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Feb 2025 11:57:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Feb 2025 11:56:21 +0000   Mon, 24 Feb 2025 11:49:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Feb 2025 11:56:21 +0000   Mon, 24 Feb 2025 11:49:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Feb 2025 11:56:21 +0000   Mon, 24 Feb 2025 11:49:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Feb 2025 11:56:21 +0000   Mon, 24 Feb 2025 11:49:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-463362
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859364Ki
	  pods:               110
	System Info:
	  Machine ID:                 612231ec20f7486499b77a55f489bc05
	  System UUID:                bf9a1270-b301-40ff-93b2-d6c1037b4f82
	  Boot ID:                    88b24366-0648-497d-a9e3-f91f35efb833
	  Kernel Version:             5.15.0-1075-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.0.0
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (23 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  default                     cloud-spanner-emulator-754dc876cd-79v72                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m48s
	  default                     registry-test                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  default                     task-pv-pod                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  headlamp                    headlamp-5d4b5d7bd6-psnsn                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         71s
	  ingress-nginx               ingress-nginx-controller-56d7c84fd4-9b54q                     100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         7m44s
	  kube-system                 coredns-668d6bf9bc-9zm8b                                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     7m53s
	  kube-system                 csi-hostpath-attacher-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m40s
	  kube-system                 csi-hostpath-resizer-0                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m40s
	  kube-system                 csi-hostpathplugin-wwzw4                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m40s
	  kube-system                 etcd-addons-463362                                            100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         7m59s
	  kube-system                 kube-apiserver-addons-463362                                  250m (3%)     0 (0%)      0 (0%)           0 (0%)         7m59s
	  kube-system                 kube-controller-manager-addons-463362                         200m (2%)     0 (0%)      0 (0%)           0 (0%)         7m59s
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m48s
	  kube-system                 kube-proxy-szncl                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m54s
	  kube-system                 kube-scheduler-addons-463362                                  100m (1%)     0 (0%)      0 (0%)           0 (0%)         7m59s
	  kube-system                 registry-6c88467877-mkfm8                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m49s
	  kube-system                 registry-proxy-59cfj                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m49s
	  kube-system                 snapshot-controller-68b874b76f-4c7m6                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m43s
	  kube-system                 snapshot-controller-68b874b76f-4vfmj                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m43s
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m49s
	  local-path-storage          helper-pod-create-pvc-251f9e44-6d35-49d7-bc1d-86c4bd59e9c9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	  local-path-storage          local-path-provisioner-76f89f99b5-cbr69                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  0 (0%)
	  memory             260Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 7m48s  kube-proxy       
	  Normal   Starting                 7m59s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 7m59s  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  7m59s  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  7m59s  kubelet          Node addons-463362 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m59s  kubelet          Node addons-463362 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m59s  kubelet          Node addons-463362 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m54s  node-controller  Node addons-463362 event: Registered Node addons-463362 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 86 be a3 7b b0 40 08 06
	[  +2.688495] IPv4: martian source 10.244.0.1 from 10.244.0.17, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a2 0b b0 fa 80 5c 08 06
	[  +0.993261] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff da 36 0e 86 c6 d6 08 06
	[  +1.480246] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000022] ll header: 00000000: ff ff ff ff ff ff 06 08 47 14 cb 72 08 06
	[  +4.992280] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 66 ad 43 a6 ef 83 08 06
	[  +0.101463] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f6 4f 20 65 da cb 08 06
	[  +0.607937] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fa 33 f7 e6 84 86 08 06
	[Feb24 11:53] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 7e ef 13 15 f7 3b 08 06
	[  +0.000582] IPv4: martian source 10.244.0.1 from 10.244.0.24, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ae 4e 41 6c 58 0b 08 06
	[Feb24 11:54] IPv4: martian source 10.244.0.1 from 10.244.0.26, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 9f 51 f6 7d cb 08 06
	[  +0.009011] IPv4: martian source 10.244.0.1 from 10.244.0.25, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff de d8 78 cb 22 9f 08 06
	[ +26.176078] IPv4: martian source 10.244.0.1 from 10.244.0.27, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ea bb cc e9 f3 c1 08 06
	[  +0.000529] IPv4: martian source 10.244.0.27 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 72 4e a7 be 59 86 08 06
	
	
	==> etcd [4d822e24e549] <==
	{"level":"info","ts":"2025-02-24T11:49:08.286117Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2025-02-24T11:49:08.286192Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-02-24T11:49:08.286210Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-02-24T11:49:08.286370Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-02-24T11:49:08.286414Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-02-24T11:49:08.974451Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2025-02-24T11:49:08.974506Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2025-02-24T11:49:08.974526Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2025-02-24T11:49:08.974562Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2025-02-24T11:49:08.974584Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2025-02-24T11:49:08.974600Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2025-02-24T11:49:08.974612Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2025-02-24T11:49:08.975393Z","caller":"etcdserver/server.go:2651","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-24T11:49:08.976014Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-24T11:49:08.976017Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-463362 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2025-02-24T11:49:08.976038Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-24T11:49:08.976223Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-02-24T11:49:08.976245Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-02-24T11:49:08.976357Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-24T11:49:08.976436Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-24T11:49:08.976461Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-24T11:49:08.976911Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-02-24T11:49:08.976943Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-02-24T11:49:08.977667Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2025-02-24T11:49:08.977804Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 11:57:11 up 18:39,  0 users,  load average: 0.20, 0.86, 0.97
	Linux addons-463362 5.15.0-1075-gcp #84~20.04.1-Ubuntu SMP Thu Jan 16 20:44:47 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [62c4c2001abe] <==
	I0224 11:55:19.648982       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0224 11:55:19.665860       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
	I0224 11:55:31.386024       1 handler.go:286] Adding GroupVersion batch.volcano.sh v1alpha1 to ResourceManager
	I0224 11:55:31.458983       1 handler.go:286] Adding GroupVersion bus.volcano.sh v1alpha1 to ResourceManager
	I0224 11:55:31.862381       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0224 11:55:31.962133       1 handler.go:286] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
	E0224 11:55:31.978934       1 authentication.go:74] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"volcano-scheduler\" not found]"
	E0224 11:55:32.076196       1 authentication.go:74] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"volcano-scheduler\" not found]"
	I0224 11:55:32.078252       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0224 11:55:32.468651       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0224 11:55:32.562172       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0224 11:55:32.576449       1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
	I0224 11:55:32.582630       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0224 11:55:33.078247       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0224 11:55:33.079401       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0224 11:55:33.182700       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0224 11:55:33.298015       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0224 11:55:33.583159       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0224 11:55:34.076500       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	E0224 11:55:51.147593       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:35090: use of closed network connection
	E0224 11:55:51.323708       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:35138: use of closed network connection
	I0224 11:56:00.710540       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.98.131.70"}
	I0224 11:56:23.432746       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0224 11:56:24.448382       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0224 11:56:48.650611       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	
	
	==> kube-controller-manager [17eae17990b1] <==
	E0224 11:56:47.960027       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0224 11:56:48.538370       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0224 11:56:48.538408       1 shared_informer.go:320] Caches are synced for resource quota
	I0224 11:56:48.651666       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0224 11:56:48.651711       1 shared_informer.go:320] Caches are synced for garbage collector
	W0224 11:56:52.678461       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0224 11:56:52.679385       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="flow.volcano.sh/v1alpha1, Resource=jobtemplates"
	W0224 11:56:52.680198       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0224 11:56:52.680237       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0224 11:56:52.989427       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0224 11:56:52.990423       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="batch.volcano.sh/v1alpha1, Resource=jobs"
	W0224 11:56:52.991188       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0224 11:56:52.991226       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0224 11:56:54.490780       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0224 11:56:54.491741       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="gadget.kinvolk.io/v1alpha1, Resource=traces"
	W0224 11:56:54.492488       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0224 11:56:54.492517       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0224 11:56:54.933694       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0224 11:56:54.934698       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="flow.volcano.sh/v1alpha1, Resource=jobflows"
	W0224 11:56:54.935474       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0224 11:56:54.935512       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0224 11:57:01.642847       1 reflector.go:362] The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking, err = the server could not find the requested resource
	E0224 11:57:01.643963       1 metadata.go:231] "The watchlist request ended with an error, falling back to the standard LIST semantics" err="the server could not find the requested resource" resource="nodeinfo.volcano.sh/v1alpha1, Resource=numatopologies"
	W0224 11:57:01.644802       1 reflector.go:569] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0224 11:57:01.644839       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [496386dab254] <==
	I0224 11:49:21.274032       1 server_linux.go:66] "Using iptables proxy"
	I0224 11:49:21.964046       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0224 11:49:21.964146       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0224 11:49:22.769454       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0224 11:49:22.769576       1 server_linux.go:170] "Using iptables Proxier"
	I0224 11:49:22.857580       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0224 11:49:22.866054       1 server.go:497] "Version info" version="v1.32.2"
	I0224 11:49:22.866088       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0224 11:49:22.870748       1 config.go:199] "Starting service config controller"
	I0224 11:49:22.870805       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0224 11:49:22.870838       1 config.go:105] "Starting endpoint slice config controller"
	I0224 11:49:22.870844       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0224 11:49:22.873100       1 config.go:329] "Starting node config controller"
	I0224 11:49:22.873113       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0224 11:49:22.972579       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0224 11:49:22.974119       1 shared_informer.go:320] Caches are synced for service config
	I0224 11:49:22.979725       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [4521e964850b] <==
	E0224 11:49:10.671983       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0224 11:49:10.672005       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0224 11:49:10.672038       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0224 11:49:10.672061       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0224 11:49:10.672084       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0224 11:49:10.672116       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0224 11:49:10.672135       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	E0224 11:49:10.672239       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0224 11:49:10.671612       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0224 11:49:10.672401       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0224 11:49:11.484247       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0224 11:49:11.484289       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0224 11:49:11.495660       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0224 11:49:11.495695       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0224 11:49:11.534305       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0224 11:49:11.534351       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0224 11:49:11.682487       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0224 11:49:11.682539       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0224 11:49:11.688915       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0224 11:49:11.688950       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0224 11:49:11.716211       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0224 11:49:11.716254       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0224 11:49:11.777780       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0224 11:49:11.777816       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0224 11:49:12.167010       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 24 11:56:23 addons-463362 kubelet[2446]: I0224 11:56:23.934473    2446 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7fa78e65-ce6a-47e6-8f91-9b4b6604d84d-proc" (OuterVolumeSpecName: "proc") pod "7fa78e65-ce6a-47e6-8f91-9b4b6604d84d" (UID: "7fa78e65-ce6a-47e6-8f91-9b4b6604d84d"). InnerVolumeSpecName "proc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Feb 24 11:56:23 addons-463362 kubelet[2446]: I0224 11:56:23.934617    2446 reconciler_common.go:299] "Volume detached for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/7fa78e65-ce6a-47e6-8f91-9b4b6604d84d-proc\") on node \"addons-463362\" DevicePath \"\""
	Feb 24 11:56:23 addons-463362 kubelet[2446]: I0224 11:56:23.934633    2446 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fa78e65-ce6a-47e6-8f91-9b4b6604d84d-oci" (OuterVolumeSpecName: "oci") pod "7fa78e65-ce6a-47e6-8f91-9b4b6604d84d" (UID: "7fa78e65-ce6a-47e6-8f91-9b4b6604d84d"). InnerVolumeSpecName "oci". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
	Feb 24 11:56:23 addons-463362 kubelet[2446]: I0224 11:56:23.934631    2446 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fa78e65-ce6a-47e6-8f91-9b4b6604d84d-config" (OuterVolumeSpecName: "config") pod "7fa78e65-ce6a-47e6-8f91-9b4b6604d84d" (UID: "7fa78e65-ce6a-47e6-8f91-9b4b6604d84d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
	Feb 24 11:56:23 addons-463362 kubelet[2446]: I0224 11:56:23.934654    2446 reconciler_common.go:299] "Volume detached for volume \"bin\" (UniqueName: \"kubernetes.io/host-path/7fa78e65-ce6a-47e6-8f91-9b4b6604d84d-bin\") on node \"addons-463362\" DevicePath \"\""
	Feb 24 11:56:23 addons-463362 kubelet[2446]: I0224 11:56:23.934667    2446 reconciler_common.go:299] "Volume detached for volume \"var\" (UniqueName: \"kubernetes.io/host-path/7fa78e65-ce6a-47e6-8f91-9b4b6604d84d-var\") on node \"addons-463362\" DevicePath \"\""
	Feb 24 11:56:23 addons-463362 kubelet[2446]: I0224 11:56:23.934678    2446 reconciler_common.go:299] "Volume detached for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/7fa78e65-ce6a-47e6-8f91-9b4b6604d84d-bpffs\") on node \"addons-463362\" DevicePath \"\""
	Feb 24 11:56:23 addons-463362 kubelet[2446]: I0224 11:56:23.934689    2446 reconciler_common.go:299] "Volume detached for volume \"etc\" (UniqueName: \"kubernetes.io/host-path/7fa78e65-ce6a-47e6-8f91-9b4b6604d84d-etc\") on node \"addons-463362\" DevicePath \"\""
	Feb 24 11:56:23 addons-463362 kubelet[2446]: I0224 11:56:23.934699    2446 reconciler_common.go:299] "Volume detached for volume \"usr\" (UniqueName: \"kubernetes.io/host-path/7fa78e65-ce6a-47e6-8f91-9b4b6604d84d-usr\") on node \"addons-463362\" DevicePath \"\""
	Feb 24 11:56:23 addons-463362 kubelet[2446]: I0224 11:56:23.934707    2446 reconciler_common.go:299] "Volume detached for volume \"opt\" (UniqueName: \"kubernetes.io/host-path/7fa78e65-ce6a-47e6-8f91-9b4b6604d84d-opt\") on node \"addons-463362\" DevicePath \"\""
	Feb 24 11:56:23 addons-463362 kubelet[2446]: I0224 11:56:23.934713    2446 reconciler_common.go:299] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/7fa78e65-ce6a-47e6-8f91-9b4b6604d84d-run\") on node \"addons-463362\" DevicePath \"\""
	Feb 24 11:56:23 addons-463362 kubelet[2446]: I0224 11:56:23.934720    2446 reconciler_common.go:299] "Volume detached for volume \"debugfs\" (UniqueName: \"kubernetes.io/host-path/7fa78e65-ce6a-47e6-8f91-9b4b6604d84d-debugfs\") on node \"addons-463362\" DevicePath \"\""
	Feb 24 11:56:23 addons-463362 kubelet[2446]: I0224 11:56:23.934729    2446 reconciler_common.go:299] "Volume detached for volume \"cgroup\" (UniqueName: \"kubernetes.io/host-path/7fa78e65-ce6a-47e6-8f91-9b4b6604d84d-cgroup\") on node \"addons-463362\" DevicePath \"\""
	Feb 24 11:56:23 addons-463362 kubelet[2446]: I0224 11:56:23.936088    2446 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fa78e65-ce6a-47e6-8f91-9b4b6604d84d-kube-api-access-p2765" (OuterVolumeSpecName: "kube-api-access-p2765") pod "7fa78e65-ce6a-47e6-8f91-9b4b6604d84d" (UID: "7fa78e65-ce6a-47e6-8f91-9b4b6604d84d"). InnerVolumeSpecName "kube-api-access-p2765". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Feb 24 11:56:24 addons-463362 kubelet[2446]: I0224 11:56:24.035553    2446 reconciler_common.go:299] "Volume detached for volume \"oci\" (UniqueName: \"kubernetes.io/empty-dir/7fa78e65-ce6a-47e6-8f91-9b4b6604d84d-oci\") on node \"addons-463362\" DevicePath \"\""
	Feb 24 11:56:24 addons-463362 kubelet[2446]: I0224 11:56:24.035601    2446 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-p2765\" (UniqueName: \"kubernetes.io/projected/7fa78e65-ce6a-47e6-8f91-9b4b6604d84d-kube-api-access-p2765\") on node \"addons-463362\" DevicePath \"\""
	Feb 24 11:56:24 addons-463362 kubelet[2446]: I0224 11:56:24.035613    2446 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7fa78e65-ce6a-47e6-8f91-9b4b6604d84d-config\") on node \"addons-463362\" DevicePath \"\""
	Feb 24 11:56:24 addons-463362 kubelet[2446]: I0224 11:56:24.777745    2446 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fa78e65-ce6a-47e6-8f91-9b4b6604d84d" path="/var/lib/kubelet/pods/7fa78e65-ce6a-47e6-8f91-9b4b6604d84d/volumes"
	Feb 24 11:56:28 addons-463362 kubelet[2446]: I0224 11:56:28.897700    2446 memory_manager.go:355] "RemoveStaleState removing state" podUID="7fa78e65-ce6a-47e6-8f91-9b4b6604d84d" containerName="gadget"
	Feb 24 11:56:28 addons-463362 kubelet[2446]: I0224 11:56:28.981147    2446 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/f9f8fb9d-4239-480f-83c1-f60f35042b64-data\") pod \"helper-pod-create-pvc-251f9e44-6d35-49d7-bc1d-86c4bd59e9c9\" (UID: \"f9f8fb9d-4239-480f-83c1-f60f35042b64\") " pod="local-path-storage/helper-pod-create-pvc-251f9e44-6d35-49d7-bc1d-86c4bd59e9c9"
	Feb 24 11:56:28 addons-463362 kubelet[2446]: I0224 11:56:28.981227    2446 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/f9f8fb9d-4239-480f-83c1-f60f35042b64-script\") pod \"helper-pod-create-pvc-251f9e44-6d35-49d7-bc1d-86c4bd59e9c9\" (UID: \"f9f8fb9d-4239-480f-83c1-f60f35042b64\") " pod="local-path-storage/helper-pod-create-pvc-251f9e44-6d35-49d7-bc1d-86c4bd59e9c9"
	Feb 24 11:56:28 addons-463362 kubelet[2446]: I0224 11:56:28.981261    2446 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvx6t\" (UniqueName: \"kubernetes.io/projected/f9f8fb9d-4239-480f-83c1-f60f35042b64-kube-api-access-vvx6t\") pod \"helper-pod-create-pvc-251f9e44-6d35-49d7-bc1d-86c4bd59e9c9\" (UID: \"f9f8fb9d-4239-480f-83c1-f60f35042b64\") " pod="local-path-storage/helper-pod-create-pvc-251f9e44-6d35-49d7-bc1d-86c4bd59e9c9"
	Feb 24 11:56:40 addons-463362 kubelet[2446]: I0224 11:56:40.767262    2446 kubelet_pods.go:1021] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-59cfj" secret="" err="secret \"gcp-auth\" not found"
	Feb 24 11:56:59 addons-463362 kubelet[2446]: I0224 11:56:59.767086    2446 kubelet_pods.go:1021] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Feb 24 11:57:09 addons-463362 kubelet[2446]: I0224 11:57:09.767200    2446 kubelet_pods.go:1021] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/cloud-spanner-emulator-754dc876cd-79v72" secret="" err="secret \"gcp-auth\" not found"
	
	
	==> storage-provisioner [6a352428a0bb] <==
	I0224 11:49:26.072759       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0224 11:49:26.171721       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0224 11:49:26.171779       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0224 11:49:26.267269       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0224 11:49:26.269012       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-463362_532ea6f2-6502-4ef0-94dd-77a88873bcec!
	I0224 11:49:26.267420       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3972d92c-821e-4b21-b413-1fb25570f542", APIVersion:"v1", ResourceVersion:"552", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-463362_532ea6f2-6502-4ef0-94dd-77a88873bcec became leader
	I0224 11:49:26.369277       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-463362_532ea6f2-6502-4ef0-94dd-77a88873bcec!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-463362 -n addons-463362
helpers_test.go:261: (dbg) Run:  kubectl --context addons-463362 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: registry-test task-pv-pod test-local-path headlamp-5d4b5d7bd6-psnsn ingress-nginx-admission-create-bccdx ingress-nginx-admission-patch-9zqk2 helper-pod-create-pvc-251f9e44-6d35-49d7-bc1d-86c4bd59e9c9
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-463362 describe pod registry-test task-pv-pod test-local-path headlamp-5d4b5d7bd6-psnsn ingress-nginx-admission-create-bccdx ingress-nginx-admission-patch-9zqk2 helper-pod-create-pvc-251f9e44-6d35-49d7-bc1d-86c4bd59e9c9
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-463362 describe pod registry-test task-pv-pod test-local-path headlamp-5d4b5d7bd6-psnsn ingress-nginx-admission-create-bccdx ingress-nginx-admission-patch-9zqk2 helper-pod-create-pvc-251f9e44-6d35-49d7-bc1d-86c4bd59e9c9: exit status 1 (78.951054ms)

-- stdout --
	Name:                      registry-test
	Namespace:                 default
	Priority:                  0
	Service Account:           default
	Node:                      addons-463362/192.168.49.2
	Start Time:                Mon, 24 Feb 2025 11:56:10 +0000
	Labels:                    run=registry-test
	Annotations:               <none>
	Status:                    Terminating (lasts <invalid>)
	Termination Grace Period:  30s
	IP:                        
	IPs:                       <none>
	Containers:
	  registry-test:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Args:
	      sh
	      -c
	      wget --spider -S http://registry.kube-system.svc.cluster.local
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sg9g6 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-sg9g6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  62s   default-scheduler  Successfully assigned default/registry-test to addons-463362
	  Normal  Pulling    62s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox"
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-463362/192.168.49.2
	Start Time:       Mon, 24 Feb 2025 11:56:19 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tgs8h (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-tgs8h:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  53s   default-scheduler  Successfully assigned default/task-pv-pod to addons-463362
	  Normal  Pulling    52s   kubelet            Pulling image "docker.io/nginx"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zfhd9 (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-zfhd9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "headlamp-5d4b5d7bd6-psnsn" not found
	Error from server (NotFound): pods "ingress-nginx-admission-create-bccdx" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-9zqk2" not found
	Error from server (NotFound): pods "helper-pod-create-pvc-251f9e44-6d35-49d7-bc1d-86c4bd59e9c9" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-463362 describe pod registry-test task-pv-pod test-local-path headlamp-5d4b5d7bd6-psnsn ingress-nginx-admission-create-bccdx ingress-nginx-admission-patch-9zqk2 helper-pod-create-pvc-251f9e44-6d35-49d7-bc1d-86c4bd59e9c9: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-463362 addons disable registry --alsologtostderr -v=1
--- FAIL: TestAddons/parallel/Registry (72.51s)


Test pass (323/346)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 4.32
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.2
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.32.2/json-events 3.5
13 TestDownloadOnly/v1.32.2/preload-exists 0
17 TestDownloadOnly/v1.32.2/LogsDuration 0.06
18 TestDownloadOnly/v1.32.2/DeleteAll 0.2
19 TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds 0.13
20 TestDownloadOnlyKic 1.02
21 TestBinaryMirror 0.76
22 TestOffline 46.03
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 392.16
29 TestAddons/serial/Volcano 37.38
31 TestAddons/serial/GCPAuth/Namespaces 0.12
32 TestAddons/serial/GCPAuth/FakeCredentials 9.47
36 TestAddons/parallel/Ingress 49.34
37 TestAddons/parallel/InspektorGadget 10.58
38 TestAddons/parallel/MetricsServer 5.61
40 TestAddons/parallel/CSI 120.96
41 TestAddons/parallel/Headlamp 114.28
42 TestAddons/parallel/CloudSpanner 6.4
43 TestAddons/parallel/LocalPath 123.88
44 TestAddons/parallel/NvidiaDevicePlugin 6.41
45 TestAddons/parallel/Yakd 11.56
46 TestAddons/parallel/AmdGpuDevicePlugin 6.42
47 TestAddons/StoppedEnableDisable 11.09
48 TestCertOptions 24.86
49 TestCertExpiration 257.66
50 TestDockerFlags 31.31
51 TestForceSystemdFlag 35.68
52 TestForceSystemdEnv 29.04
54 TestKVMDriverInstallOrUpdate 1.2
58 TestErrorSpam/setup 21.11
59 TestErrorSpam/start 0.56
60 TestErrorSpam/status 0.83
61 TestErrorSpam/pause 1.13
62 TestErrorSpam/unpause 1.3
63 TestErrorSpam/stop 1.92
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 35.8
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 38.29
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.07
74 TestFunctional/serial/CacheCmd/cache/add_remote 2.25
75 TestFunctional/serial/CacheCmd/cache/add_local 0.66
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.27
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.22
80 TestFunctional/serial/CacheCmd/cache/delete 0.1
81 TestFunctional/serial/MinikubeKubectlCmd 0.11
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
83 TestFunctional/serial/ExtraConfig 40.15
84 TestFunctional/serial/ComponentHealth 0.06
85 TestFunctional/serial/LogsCmd 0.93
86 TestFunctional/serial/LogsFileCmd 0.93
87 TestFunctional/serial/InvalidService 4.23
89 TestFunctional/parallel/ConfigCmd 0.43
90 TestFunctional/parallel/DashboardCmd 9.45
91 TestFunctional/parallel/DryRun 0.32
92 TestFunctional/parallel/InternationalLanguage 0.15
93 TestFunctional/parallel/StatusCmd 0.87
97 TestFunctional/parallel/ServiceCmdConnect 8.87
98 TestFunctional/parallel/AddonsCmd 0.19
99 TestFunctional/parallel/PersistentVolumeClaim 30.8
101 TestFunctional/parallel/SSHCmd 0.6
102 TestFunctional/parallel/CpCmd 2.06
103 TestFunctional/parallel/MySQL 22.76
104 TestFunctional/parallel/FileSync 0.31
105 TestFunctional/parallel/CertSync 1.77
109 TestFunctional/parallel/NodeLabels 0.08
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.4
113 TestFunctional/parallel/License 0.18
114 TestFunctional/parallel/DockerEnv/bash 1.05
115 TestFunctional/parallel/UpdateContextCmd/no_changes 0.14
116 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
117 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.14
119 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.63
120 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
122 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 17.27
123 TestFunctional/parallel/Version/short 0.06
124 TestFunctional/parallel/Version/components 0.69
125 TestFunctional/parallel/ImageCommands/ImageListShort 0.21
126 TestFunctional/parallel/ImageCommands/ImageListTable 0.26
127 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
128 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
129 TestFunctional/parallel/ImageCommands/ImageBuild 2.51
130 TestFunctional/parallel/ImageCommands/Setup 0.72
131 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.11
132 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.82
133 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 0.99
134 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.49
135 TestFunctional/parallel/ImageCommands/ImageRemove 0.42
136 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.8
137 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.55
138 TestFunctional/parallel/MountCmd/any-port 12.95
139 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
140 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
144 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
145 TestFunctional/parallel/ProfileCmd/profile_not_create 0.36
146 TestFunctional/parallel/ProfileCmd/profile_list 0.36
147 TestFunctional/parallel/ProfileCmd/profile_json_output 0.39
148 TestFunctional/parallel/ServiceCmd/DeployApp 9.16
149 TestFunctional/parallel/MountCmd/specific-port 1.68
150 TestFunctional/parallel/MountCmd/VerifyCleanup 1.76
151 TestFunctional/parallel/ServiceCmd/List 1.72
152 TestFunctional/parallel/ServiceCmd/JSONOutput 1.73
153 TestFunctional/parallel/ServiceCmd/HTTPS 0.56
154 TestFunctional/parallel/ServiceCmd/Format 0.5
155 TestFunctional/parallel/ServiceCmd/URL 0.5
156 TestFunctional/delete_echo-server_images 0.04
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
163 TestMultiControlPlane/serial/StartCluster 99.06
164 TestMultiControlPlane/serial/DeployApp 5.79
165 TestMultiControlPlane/serial/PingHostFromPods 1.07
166 TestMultiControlPlane/serial/AddWorkerNode 20.4
167 TestMultiControlPlane/serial/NodeLabels 0.07
168 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.8
169 TestMultiControlPlane/serial/CopyFile 15.36
170 TestMultiControlPlane/serial/StopSecondaryNode 11.43
171 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.65
172 TestMultiControlPlane/serial/RestartSecondaryNode 28.27
173 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.83
174 TestMultiControlPlane/serial/RestartClusterKeepsNodes 154.01
175 TestMultiControlPlane/serial/DeleteSecondaryNode 9.19
176 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.63
177 TestMultiControlPlane/serial/StopCluster 32.5
178 TestMultiControlPlane/serial/RestartCluster 77.26
179 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.63
180 TestMultiControlPlane/serial/AddSecondaryNode 41.62
181 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.83
184 TestImageBuild/serial/Setup 20.99
185 TestImageBuild/serial/NormalBuild 0.9
186 TestImageBuild/serial/BuildWithBuildArg 0.65
187 TestImageBuild/serial/BuildWithDockerIgnore 0.43
188 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.46
192 TestJSONOutput/start/Command 68.31
193 TestJSONOutput/start/Audit 0
195 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/pause/Command 0.55
199 TestJSONOutput/pause/Audit 0
201 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
204 TestJSONOutput/unpause/Command 0.44
205 TestJSONOutput/unpause/Audit 0
207 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
208 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
210 TestJSONOutput/stop/Command 10.85
211 TestJSONOutput/stop/Audit 0
213 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
214 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
215 TestErrorJSONOutput 0.2
217 TestKicCustomNetwork/create_custom_network 26.11
218 TestKicCustomNetwork/use_default_bridge_network 22.81
219 TestKicExistingNetwork 22.54
220 TestKicCustomSubnet 22.61
221 TestKicStaticIP 25.55
222 TestMainNoArgs 0.05
223 TestMinikubeProfile 49.48
226 TestMountStart/serial/StartWithMountFirst 9
227 TestMountStart/serial/VerifyMountFirst 0.24
228 TestMountStart/serial/StartWithMountSecond 9.03
229 TestMountStart/serial/VerifyMountSecond 0.24
230 TestMountStart/serial/DeleteFirst 1.43
231 TestMountStart/serial/VerifyMountPostDelete 0.23
232 TestMountStart/serial/Stop 1.17
233 TestMountStart/serial/RestartStopped 7.37
234 TestMountStart/serial/VerifyMountPostStop 0.24
237 TestMultiNode/serial/FreshStart2Nodes 56.69
238 TestMultiNode/serial/DeployApp2Nodes 35.56
239 TestMultiNode/serial/PingHostFrom2Pods 0.73
240 TestMultiNode/serial/AddNode 17.67
241 TestMultiNode/serial/MultiNodeLabels 0.06
242 TestMultiNode/serial/ProfileList 0.59
243 TestMultiNode/serial/CopyFile 8.63
244 TestMultiNode/serial/StopNode 2.06
245 TestMultiNode/serial/StartAfterStop 9.7
246 TestMultiNode/serial/RestartKeepsNodes 81.77
247 TestMultiNode/serial/DeleteNode 4.9
248 TestMultiNode/serial/StopMultiNode 21.31
249 TestMultiNode/serial/RestartMultiNode 47.45
250 TestMultiNode/serial/ValidateNameConflict 24.71
255 TestPreload 90.23
257 TestScheduledStopUnix 94.26
258 TestSkaffold 97.33
260 TestInsufficientStorage 12.53
261 TestRunningBinaryUpgrade 102.61
263 TestKubernetesUpgrade 324.92
264 TestMissingContainerUpgrade 130.16
266 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
267 TestNoKubernetes/serial/StartWithK8s 33.08
268 TestNoKubernetes/serial/StartWithStopK8s 19.23
280 TestNoKubernetes/serial/Start 9.07
281 TestNoKubernetes/serial/VerifyK8sNotRunning 0.3
282 TestNoKubernetes/serial/ProfileList 4.83
283 TestNoKubernetes/serial/Stop 1.2
284 TestNoKubernetes/serial/StartNoArgs 9.19
285 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.32
286 TestStoppedBinaryUpgrade/Setup 0.42
287 TestStoppedBinaryUpgrade/Upgrade 66.76
289 TestPause/serial/Start 38.84
290 TestStoppedBinaryUpgrade/MinikubeLogs 1.08
298 TestNetworkPlugins/group/auto/Start 63.86
299 TestPause/serial/SecondStartNoReconfiguration 33.64
300 TestNetworkPlugins/group/false/Start 68.11
301 TestPause/serial/Pause 0.54
302 TestPause/serial/VerifyStatus 0.29
303 TestPause/serial/Unpause 0.41
304 TestPause/serial/PauseAgain 0.59
305 TestPause/serial/DeletePaused 2.14
306 TestPause/serial/VerifyDeletedResources 15.67
307 TestNetworkPlugins/group/kindnet/Start 60.3
308 TestNetworkPlugins/group/auto/KubeletFlags 0.27
309 TestNetworkPlugins/group/auto/NetCatPod 9.24
310 TestNetworkPlugins/group/auto/DNS 0.14
311 TestNetworkPlugins/group/auto/Localhost 0.12
312 TestNetworkPlugins/group/auto/HairPin 0.12
313 TestNetworkPlugins/group/false/KubeletFlags 0.27
314 TestNetworkPlugins/group/false/NetCatPod 9.21
315 TestNetworkPlugins/group/false/DNS 0.13
316 TestNetworkPlugins/group/false/Localhost 0.11
317 TestNetworkPlugins/group/false/HairPin 0.11
318 TestNetworkPlugins/group/flannel/Start 47.5
319 TestNetworkPlugins/group/enable-default-cni/Start 59.8
320 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
321 TestNetworkPlugins/group/kindnet/KubeletFlags 0.31
322 TestNetworkPlugins/group/kindnet/NetCatPod 9.24
323 TestNetworkPlugins/group/kindnet/DNS 0.15
324 TestNetworkPlugins/group/kindnet/Localhost 0.13
325 TestNetworkPlugins/group/kindnet/HairPin 0.16
326 TestNetworkPlugins/group/flannel/ControllerPod 6.01
327 TestNetworkPlugins/group/flannel/KubeletFlags 0.26
328 TestNetworkPlugins/group/flannel/NetCatPod 9.19
329 TestNetworkPlugins/group/bridge/Start 65.69
330 TestNetworkPlugins/group/flannel/DNS 0.13
331 TestNetworkPlugins/group/flannel/Localhost 0.11
332 TestNetworkPlugins/group/flannel/HairPin 0.12
333 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.32
334 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.23
335 TestNetworkPlugins/group/kubenet/Start 69.65
336 TestNetworkPlugins/group/enable-default-cni/DNS 0.23
337 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
338 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
339 TestNetworkPlugins/group/custom-flannel/Start 42.05
340 TestNetworkPlugins/group/calico/Start 35.49
341 TestNetworkPlugins/group/bridge/KubeletFlags 0.32
342 TestNetworkPlugins/group/bridge/NetCatPod 9.23
343 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.28
344 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.19
345 TestNetworkPlugins/group/bridge/DNS 0.15
346 TestNetworkPlugins/group/bridge/Localhost 0.12
347 TestNetworkPlugins/group/bridge/HairPin 0.11
348 TestNetworkPlugins/group/custom-flannel/DNS 0.15
349 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
350 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
351 TestNetworkPlugins/group/calico/ControllerPod 19.01
352 TestNetworkPlugins/group/kubenet/KubeletFlags 0.38
353 TestNetworkPlugins/group/kubenet/NetCatPod 8.36
355 TestStartStop/group/old-k8s-version/serial/FirstStart 128.62
356 TestNetworkPlugins/group/kubenet/DNS 0.15
357 TestNetworkPlugins/group/kubenet/Localhost 0.15
358 TestNetworkPlugins/group/kubenet/HairPin 0.14
359 TestNetworkPlugins/group/calico/KubeletFlags 0.46
360 TestNetworkPlugins/group/calico/NetCatPod 10.2
362 TestStartStop/group/no-preload/serial/FirstStart 79.24
363 TestNetworkPlugins/group/calico/DNS 0.18
364 TestNetworkPlugins/group/calico/Localhost 0.16
365 TestNetworkPlugins/group/calico/HairPin 0.14
367 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 67.43
369 TestStartStop/group/newest-cni/serial/FirstStart 30.02
370 TestStartStop/group/newest-cni/serial/DeployApp 0
371 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.85
372 TestStartStop/group/newest-cni/serial/Stop 10.82
373 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
374 TestStartStop/group/newest-cni/serial/SecondStart 14.21
375 TestStartStop/group/no-preload/serial/DeployApp 9.31
376 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.31
377 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.94
378 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
379 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
380 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
381 TestStartStop/group/newest-cni/serial/Pause 2.5
382 TestStartStop/group/no-preload/serial/Stop 10.76
384 TestStartStop/group/embed-certs/serial/FirstStart 62.65
385 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.93
386 TestStartStop/group/default-k8s-diff-port/serial/Stop 10.81
387 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.27
388 TestStartStop/group/no-preload/serial/SecondStart 262.9
389 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.28
390 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 286.77
391 TestStartStop/group/old-k8s-version/serial/DeployApp 8.44
392 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.74
393 TestStartStop/group/old-k8s-version/serial/Stop 10.86
394 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
395 TestStartStop/group/old-k8s-version/serial/SecondStart 23.23
396 TestStartStop/group/embed-certs/serial/DeployApp 10.27
397 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 24.01
398 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.89
399 TestStartStop/group/embed-certs/serial/Stop 10.94
400 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
401 TestStartStop/group/embed-certs/serial/SecondStart 262.03
402 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
403 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.21
404 TestStartStop/group/old-k8s-version/serial/Pause 2.45
405 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
406 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
407 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.21
408 TestStartStop/group/no-preload/serial/Pause 2.23
409 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
410 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
411 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.21
412 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.26
413 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
414 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
415 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.2
416 TestStartStop/group/embed-certs/serial/Pause 2.26
TestDownloadOnly/v1.20.0/json-events (4.32s)
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-193212 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-193212 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (4.321732709s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (4.32s)
TestDownloadOnly/v1.20.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0224 11:48:25.833440  736216 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I0224 11:48:25.833559  736216 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20451-729451/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-193212
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-193212: exit status 85 (62.64088ms)
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-193212 | jenkins | v1.35.0 | 24 Feb 25 11:48 UTC |          |
	|         | -p download-only-193212        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/24 11:48:21
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0224 11:48:21.554295  736228 out.go:345] Setting OutFile to fd 1 ...
	I0224 11:48:21.554395  736228 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0224 11:48:21.554403  736228 out.go:358] Setting ErrFile to fd 2...
	I0224 11:48:21.554407  736228 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0224 11:48:21.554583  736228 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20451-729451/.minikube/bin
	W0224 11:48:21.554732  736228 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20451-729451/.minikube/config/config.json: open /home/jenkins/minikube-integration/20451-729451/.minikube/config/config.json: no such file or directory
	I0224 11:48:21.555268  736228 out.go:352] Setting JSON to true
	I0224 11:48:21.556245  736228 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":66650,"bootTime":1740331051,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0224 11:48:21.556347  736228 start.go:139] virtualization: kvm guest
	I0224 11:48:21.558601  736228 out.go:97] [download-only-193212] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	W0224 11:48:21.558714  736228 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20451-729451/.minikube/cache/preloaded-tarball: no such file or directory
	I0224 11:48:21.558763  736228 notify.go:220] Checking for updates...
	I0224 11:48:21.560035  736228 out.go:169] MINIKUBE_LOCATION=20451
	I0224 11:48:21.561307  736228 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0224 11:48:21.562462  736228 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20451-729451/kubeconfig
	I0224 11:48:21.563572  736228 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20451-729451/.minikube
	I0224 11:48:21.564610  736228 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0224 11:48:21.566754  736228 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0224 11:48:21.567007  736228 driver.go:394] Setting default libvirt URI to qemu:///system
	I0224 11:48:21.588379  736228 docker.go:123] docker version: linux-28.0.0:Docker Engine - Community
	I0224 11:48:21.588447  736228 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0224 11:48:21.924017  736228 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:54 SystemTime:2025-02-24 11:48:21.915591447 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:28.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0224 11:48:21.924138  736228 docker.go:318] overlay module found
	I0224 11:48:21.925635  736228 out.go:97] Using the docker driver based on user configuration
	I0224 11:48:21.925664  736228 start.go:297] selected driver: docker
	I0224 11:48:21.925670  736228 start.go:901] validating driver "docker" against <nil>
	I0224 11:48:21.925759  736228 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0224 11:48:21.975203  736228 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:54 SystemTime:2025-02-24 11:48:21.965468236 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:28.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0224 11:48:21.975376  736228 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0224 11:48:21.975908  736228 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0224 11:48:21.976090  736228 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0224 11:48:21.977720  736228 out.go:169] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-193212 host does not exist
	  To start a cluster, run: "minikube start -p download-only-193212"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
TestDownloadOnly/v1.20.0/DeleteAll (0.2s)
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.20s)
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-193212
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)
TestDownloadOnly/v1.32.2/json-events (3.5s)
=== RUN   TestDownloadOnly/v1.32.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-597495 --force --alsologtostderr --kubernetes-version=v1.32.2 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-597495 --force --alsologtostderr --kubernetes-version=v1.32.2 --container-runtime=docker --driver=docker  --container-runtime=docker: (3.49969563s)
--- PASS: TestDownloadOnly/v1.32.2/json-events (3.50s)
TestDownloadOnly/v1.32.2/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.32.2/preload-exists
I0224 11:48:29.726721  736216 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
I0224 11:48:29.726784  736216 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20451-729451/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.2/preload-exists (0.00s)
TestDownloadOnly/v1.32.2/LogsDuration (0.06s)
=== RUN   TestDownloadOnly/v1.32.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-597495
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-597495: exit status 85 (61.667543ms)
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-193212 | jenkins | v1.35.0 | 24 Feb 25 11:48 UTC |                     |
	|         | -p download-only-193212        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.35.0 | 24 Feb 25 11:48 UTC | 24 Feb 25 11:48 UTC |
	| delete  | -p download-only-193212        | download-only-193212 | jenkins | v1.35.0 | 24 Feb 25 11:48 UTC | 24 Feb 25 11:48 UTC |
	| start   | -o=json --download-only        | download-only-597495 | jenkins | v1.35.0 | 24 Feb 25 11:48 UTC |                     |
	|         | -p download-only-597495        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.32.2   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/24 11:48:26
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0224 11:48:26.268695  736570 out.go:345] Setting OutFile to fd 1 ...
	I0224 11:48:26.268937  736570 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0224 11:48:26.268946  736570 out.go:358] Setting ErrFile to fd 2...
	I0224 11:48:26.268950  736570 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0224 11:48:26.269126  736570 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20451-729451/.minikube/bin
	I0224 11:48:26.269710  736570 out.go:352] Setting JSON to true
	I0224 11:48:26.270545  736570 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":66655,"bootTime":1740331051,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0224 11:48:26.270605  736570 start.go:139] virtualization: kvm guest
	I0224 11:48:26.272304  736570 out.go:97] [download-only-597495] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0224 11:48:26.272457  736570 notify.go:220] Checking for updates...
	I0224 11:48:26.273505  736570 out.go:169] MINIKUBE_LOCATION=20451
	I0224 11:48:26.274622  736570 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0224 11:48:26.275740  736570 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20451-729451/kubeconfig
	I0224 11:48:26.276786  736570 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20451-729451/.minikube
	I0224 11:48:26.277864  736570 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0224 11:48:26.279713  736570 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0224 11:48:26.279895  736570 driver.go:394] Setting default libvirt URI to qemu:///system
	I0224 11:48:26.300549  736570 docker.go:123] docker version: linux-28.0.0:Docker Engine - Community
	I0224 11:48:26.300622  736570 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0224 11:48:26.350850  736570 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-02-24 11:48:26.342015434 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:28.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0224 11:48:26.350996  736570 docker.go:318] overlay module found
	I0224 11:48:26.352515  736570 out.go:97] Using the docker driver based on user configuration
	I0224 11:48:26.352544  736570 start.go:297] selected driver: docker
	I0224 11:48:26.352564  736570 start.go:901] validating driver "docker" against <nil>
	I0224 11:48:26.352654  736570 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0224 11:48:26.402145  736570 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-02-24 11:48:26.393695809 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:28.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0224 11:48:26.402341  736570 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0224 11:48:26.402865  736570 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0224 11:48:26.403012  736570 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0224 11:48:26.404564  736570 out.go:169] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-597495 host does not exist
	  To start a cluster, run: "minikube start -p download-only-597495"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.2/LogsDuration (0.06s)

TestDownloadOnly/v1.32.2/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.32.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.32.2/DeleteAll (0.20s)

TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-597495
--- PASS: TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnlyKic (1.02s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-598050 --alsologtostderr --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "download-docker-598050" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-598050
--- PASS: TestDownloadOnlyKic (1.02s)

TestBinaryMirror (0.76s)

=== RUN   TestBinaryMirror
I0224 11:48:31.386434  736216 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-957722 --alsologtostderr --binary-mirror http://127.0.0.1:37367 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-957722" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-957722
--- PASS: TestBinaryMirror (0.76s)

TestOffline (46.03s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-772723 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-772723 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (43.519573344s)
helpers_test.go:175: Cleaning up "offline-docker-772723" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-772723
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-772723: (2.506538911s)
--- PASS: TestOffline (46.03s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-463362
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-463362: exit status 85 (54.8659ms)

-- stdout --
	* Profile "addons-463362" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-463362"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-463362
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-463362: exit status 85 (54.040658ms)

-- stdout --
	* Profile "addons-463362" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-463362"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (392.16s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-463362 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-463362 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (6m32.158296674s)
--- PASS: TestAddons/Setup (392.16s)

TestAddons/serial/Volcano (37.38s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:823: volcano-controller stabilized in 11.931072ms
addons_test.go:807: volcano-scheduler stabilized in 11.982312ms
addons_test.go:815: volcano-admission stabilized in 12.004221ms
addons_test.go:829: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-75fdd99bcf-pmznr" [5d018d9d-45a3-4491-8e22-b0441139536e] Running
addons_test.go:829: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.003408281s
addons_test.go:833: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-75d8f6b5c-4hjl9" [4f75356c-096e-4d64-9688-7868beb60474] Running
addons_test.go:833: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.002999964s
addons_test.go:837: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-86bdc5c9c-44zps" [9c7471b7-f06b-4235-9bf4-a045eca97058] Running
addons_test.go:837: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003308331s
addons_test.go:842: (dbg) Run:  kubectl --context addons-463362 delete -n volcano-system job volcano-admission-init
addons_test.go:848: (dbg) Run:  kubectl --context addons-463362 create -f testdata/vcjob.yaml
addons_test.go:856: (dbg) Run:  kubectl --context addons-463362 get vcjob -n my-volcano
addons_test.go:874: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [afd52c1d-6ff3-4663-9352-561de00af6ea] Pending
helpers_test.go:344: "test-job-nginx-0" [afd52c1d-6ff3-4663-9352-561de00af6ea] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [afd52c1d-6ff3-4663-9352-561de00af6ea] Running
addons_test.go:874: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 11.003357942s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-463362 addons disable volcano --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-463362 addons disable volcano --alsologtostderr -v=1: (11.015624346s)
--- PASS: TestAddons/serial/Volcano (37.38s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-463362 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-463362 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/serial/GCPAuth/FakeCredentials (9.47s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-463362 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-463362 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9b3dd8d3-6e34-4c03-9762-0946ce75723e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [9b3dd8d3-6e34-4c03-9762-0946ce75723e] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003396605s
addons_test.go:633: (dbg) Run:  kubectl --context addons-463362 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-463362 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-463362 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.47s)

TestAddons/parallel/Ingress (49.34s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-463362 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-463362 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-463362 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [c1976047-6418-4146-846a-70153533888b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [c1976047-6418-4146-846a-70153533888b] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 39.002964115s
I0224 11:57:52.068776  736216 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-463362 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-463362 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-463362 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-463362 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-463362 addons disable ingress-dns --alsologtostderr -v=1: (1.6351867s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-463362 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-463362 addons disable ingress --alsologtostderr -v=1: (7.561963861s)
--- PASS: TestAddons/parallel/Ingress (49.34s)

TestAddons/parallel/InspektorGadget (10.58s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-9hx5t" [7fa78e65-ce6a-47e6-8f91-9b4b6604d84d] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003569901s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-463362 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-463362 addons disable inspektor-gadget --alsologtostderr -v=1: (5.57649069s)
--- PASS: TestAddons/parallel/InspektorGadget (10.58s)

TestAddons/parallel/MetricsServer (5.61s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 46.856872ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-9ms6n" [d7c51419-cc63-4be6-8bf3-5609655033aa] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003689773s
addons_test.go:402: (dbg) Run:  kubectl --context addons-463362 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-463362 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.61s)

TestAddons/parallel/CSI (120.96s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0224 11:56:12.127166  736216 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0224 11:56:12.130390  736216 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0224 11:56:12.130413  736216 kapi.go:107] duration metric: took 3.266184ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 3.275593ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-463362 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463362 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463362 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463362 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463362 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463362 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463362 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463362 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463362 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-463362 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [72e7acf3-cb9e-450e-a9ea-e4aa37f9d582] Pending
helpers_test.go:344: "task-pv-pod" [72e7acf3-cb9e-450e-a9ea-e4aa37f9d582] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [72e7acf3-cb9e-450e-a9ea-e4aa37f9d582] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 1m30.004206343s
addons_test.go:511: (dbg) Run:  kubectl --context addons-463362 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-463362 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-463362 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-463362 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-463362 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-463362 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463362 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463362 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463362 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463362 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463362 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463362 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463362 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-463362 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [da648974-cf79-41af-9ca8-14861ec74a38] Pending
helpers_test.go:344: "task-pv-pod-restore" [da648974-cf79-41af-9ca8-14861ec74a38] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.00372247s
addons_test.go:553: (dbg) Run:  kubectl --context addons-463362 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-463362 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-463362 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-463362 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-463362 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-463362 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.426538678s)
--- PASS: TestAddons/parallel/CSI (120.96s)

TestAddons/parallel/Headlamp (114.28s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-463362 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5d4b5d7bd6-psnsn" [bb003e34-5613-452f-adbc-b2da08af773e] Pending
helpers_test.go:344: "headlamp-5d4b5d7bd6-psnsn" [bb003e34-5613-452f-adbc-b2da08af773e] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5d4b5d7bd6-psnsn" [bb003e34-5613-452f-adbc-b2da08af773e] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 1m48.003473154s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-463362 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-463362 addons disable headlamp --alsologtostderr -v=1: (5.620519456s)
--- PASS: TestAddons/parallel/Headlamp (114.28s)

TestAddons/parallel/CloudSpanner (6.4s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-754dc876cd-79v72" [baa06b23-76c8-470d-8c1f-138705409d06] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.002267273s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-463362 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.40s)

TestAddons/parallel/LocalPath (123.88s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-463362 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-463362 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-463362 get pvc test-pvc -o jsonpath={.status.phase} -n default
[the helpers_test.go:394 poll above repeats 77 more times while the test waits for test-pvc to bind]
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [356bf5e0-aad1-40cc-9e05-df5aba5a6534] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [356bf5e0-aad1-40cc-9e05-df5aba5a6534] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [356bf5e0-aad1-40cc-9e05-df5aba5a6534] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003386663s
addons_test.go:906: (dbg) Run:  kubectl --context addons-463362 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-463362 ssh "cat /opt/local-path-provisioner/pvc-251f9e44-6d35-49d7-bc1d-86c4bd59e9c9_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-463362 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-463362 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-463362 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-463362 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.032413342s)
--- PASS: TestAddons/parallel/LocalPath (123.88s)
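The long run of helpers_test.go:394 lines in this test is a poll loop: the helper re-runs `kubectl get pvc test-pvc -o jsonpath={.status.phase}` until the phase reaches the expected value or the 5m0s budget expires. A minimal shell sketch of that wait loop (the `wait_for` helper name is illustrative, not minikube's actual code):

```shell
# Illustrative wait loop: re-run a command until it prints the expected
# value or the timeout (in seconds) elapses.
wait_for() {
  local expect=$1 timeout=$2; shift 2
  local deadline=$(( $(date +%s) + timeout ))
  while [ "$(date +%s)" -le "$deadline" ]; do
    [ "$("$@")" = "$expect" ] && return 0
    sleep 1
  done
  return 1
}

# The test's equivalent invocation (requires a live cluster):
# wait_for Bound 300 kubectl --context addons-463362 \
#   get pvc test-pvc -o 'jsonpath={.status.phase}' -n default
```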

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.41s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-k78zm" [1db4514c-2559-4b82-8c66-9fa073a836ff] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003556737s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-463362 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.41s)

                                                
                                    
TestAddons/parallel/Yakd (11.56s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-66f9q" [2558683b-1ebb-48f3-8b0f-39706ca30053] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003778158s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-463362 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-463362 addons disable yakd --alsologtostderr -v=1: (5.552405542s)
--- PASS: TestAddons/parallel/Yakd (11.56s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (6.42s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:977: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:344: "amd-gpu-device-plugin-g6rln" [c7655b51-9c2e-43b3-a59f-0c24440ec729] Running
addons_test.go:977: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 6.00440246s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-463362 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (6.42s)

                                                
                                    
TestAddons/StoppedEnableDisable (11.09s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-463362
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-463362: (10.83547836s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-463362
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-463362
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-463362
--- PASS: TestAddons/StoppedEnableDisable (11.09s)

                                                
                                    
TestCertOptions (24.86s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-154321 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-154321 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (21.976613031s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-154321 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-154321 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-154321 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-154321" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-154321
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-154321: (2.264808729s)
--- PASS: TestCertOptions (24.86s)
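cert_options_test.go:60 verifies that the extra `--apiserver-ips`/`--apiserver-names` values end up as Subject Alternative Names in the apiserver certificate. The same openssl inspection can be reproduced against a throwaway self-signed certificate (paths and CN below are placeholders; assumes OpenSSL 1.1.1+ for `-addext`):

```shell
# Generate a throwaway cert carrying the SANs the test passes via
# --apiserver-ips / --apiserver-names, then inspect it the way the test does.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$tmp/key.pem" -out "$tmp/cert.pem" -subj "/CN=minikube" \
  -addext "subjectAltName=DNS:localhost,DNS:www.google.com,IP:127.0.0.1,IP:192.168.15.15"

# Equivalent of: minikube ssh "openssl x509 -text -noout -in .../apiserver.crt"
openssl x509 -text -noout -in "$tmp/cert.pem" | grep -A1 "Subject Alternative Name"
```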

                                                
                                    
TestCertExpiration (257.66s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-839242 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-839242 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (54.105838689s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-839242 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-839242 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (21.399165169s)
helpers_test.go:175: Cleaning up "cert-expiration-839242" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-839242
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-839242: (2.157960078s)
--- PASS: TestCertExpiration (257.66s)

                                                
                                    
TestDockerFlags (31.31s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-776004 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-776004 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (27.089930864s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-776004 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-776004 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-776004" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-776004
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-776004: (3.580286894s)
--- PASS: TestDockerFlags (31.31s)
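docker_test.go:56/67 assert that the `--docker-env` and `--docker-opt` flags surface in the docker systemd unit's `Environment=` and `ExecStart=` properties. A sketch of that substring check against a sample line (the sample `ExecStart` string below is hypothetical; a real run feeds in the `systemctl show` output):

```shell
# Return 0 only if every expected flag appears in the given property line.
check_opts() {
  local line=$1; shift
  local opt
  for opt in "$@"; do
    case "$line" in
      *"$opt"*) ;;                       # found, keep checking
      *) echo "missing: $opt"; return 1 ;;
    esac
  done
}

# Hypothetical sample of what `systemctl show docker --property=ExecStart`
# might report after --docker-opt=debug --docker-opt=icc=true:
check_opts 'ExecStart=/usr/bin/dockerd --debug --icc=true' --debug --icc=true
```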

                                                
                                    
TestForceSystemdFlag (35.68s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-813736 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-813736 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (33.069680076s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-813736 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-813736" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-813736
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-813736: (2.246634204s)
--- PASS: TestForceSystemdFlag (35.68s)

                                                
                                    
TestForceSystemdEnv (29.04s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-861960 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-861960 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (26.521251566s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-861960 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-861960" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-861960
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-861960: (2.151331473s)
--- PASS: TestForceSystemdEnv (29.04s)

                                                
                                    
TestKVMDriverInstallOrUpdate (1.20s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I0224 12:26:23.371638  736216 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0224 12:26:23.371772  736216 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/Docker_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0224 12:26:23.400553  736216 install.go:62] docker-machine-driver-kvm2: exit status 1
W0224 12:26:23.400879  736216 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0224 12:26:23.400937  736216 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2169195261/001/docker-machine-driver-kvm2
I0224 12:26:23.507076  736216 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2169195261/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x54882a0 0x54882a0 0x54882a0 0x54882a0 0x54882a0 0x54882a0 0x54882a0] Decompressors:map[bz2:0xc000015bf8 gz:0xc000015c80 tar:0xc000015c30 tar.bz2:0xc000015c40 tar.gz:0xc000015c50 tar.xz:0xc000015c60 tar.zst:0xc000015c70 tbz2:0xc000015c40 tgz:0xc000015c50 txz:0xc000015c60 tzst:0xc000015c70 xz:0xc000015c88 zip:0xc000015c90 zst:0xc000015ca0] Getters:map[file:0xc001f16eb0 http:0xc001f26730 https:0xc001f26780] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0224 12:26:23.507118  736216 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2169195261/001/docker-machine-driver-kvm2
I0224 12:26:24.085009  736216 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0224 12:26:24.085123  736216 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/Docker_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0224 12:26:24.117373  736216 install.go:137] /home/jenkins/workspace/Docker_Linux_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0224 12:26:24.117413  736216 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0224 12:26:24.117491  736216 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0224 12:26:24.117522  736216 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2169195261/002/docker-machine-driver-kvm2
I0224 12:26:24.143999  736216 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2169195261/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x54882a0 0x54882a0 0x54882a0 0x54882a0 0x54882a0 0x54882a0 0x54882a0] Decompressors:map[bz2:0xc000015bf8 gz:0xc000015c80 tar:0xc000015c30 tar.bz2:0xc000015c40 tar.gz:0xc000015c50 tar.xz:0xc000015c60 tar.zst:0xc000015c70 tbz2:0xc000015c40 tgz:0xc000015c50 txz:0xc000015c60 tzst:0xc000015c70 xz:0xc000015c88 zip:0xc000015c90 zst:0xc000015ca0] Getters:map[file:0xc001f17f30 http:0xc001f27ae0 https:0xc001f27b30] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0224 12:26:24.144061  736216 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2169195261/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (1.20s)
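The W/I lines in this test show download.go's fallback: the arch-specific `docker-machine-driver-kvm2-amd64` checksum 404s for v1.3.0, so it retries the common artifact name. A sketch of that fallback flow, with the downloader injected so it can be exercised without network access (the `fetch_driver` helper is illustrative, not minikube's actual code):

```shell
# Try the arch-specific artifact first; on any failure (such as the 404 on
# the .sha256 checksum seen above), fall back to the common artifact name.
fetch_driver() {
  local downloader=$1 ver=$2
  local base="https://github.com/kubernetes/minikube/releases/download/${ver}"
  "$downloader" "${base}/docker-machine-driver-kvm2-amd64" ||
    "$downloader" "${base}/docker-machine-driver-kvm2"
}

# Real use would pass a function wrapping curl/wget, e.g.:
# dl() { curl -fsSL -o docker-machine-driver-kvm2 "$1"; }
# fetch_driver dl v1.3.0
```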

                                                
                                    
TestErrorSpam/setup (21.11s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-200532 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-200532 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-200532 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-200532 --driver=docker  --container-runtime=docker: (21.10807612s)
--- PASS: TestErrorSpam/setup (21.11s)

                                                
                                    
TestErrorSpam/start (0.56s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-200532 --log_dir /tmp/nospam-200532 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-200532 --log_dir /tmp/nospam-200532 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-200532 --log_dir /tmp/nospam-200532 start --dry-run
--- PASS: TestErrorSpam/start (0.56s)

                                                
                                    
TestErrorSpam/status (0.83s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-200532 --log_dir /tmp/nospam-200532 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-200532 --log_dir /tmp/nospam-200532 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-200532 --log_dir /tmp/nospam-200532 status
--- PASS: TestErrorSpam/status (0.83s)

                                                
                                    
TestErrorSpam/pause (1.13s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-200532 --log_dir /tmp/nospam-200532 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-200532 --log_dir /tmp/nospam-200532 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-200532 --log_dir /tmp/nospam-200532 pause
--- PASS: TestErrorSpam/pause (1.13s)

                                                
                                    
TestErrorSpam/unpause (1.30s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-200532 --log_dir /tmp/nospam-200532 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-200532 --log_dir /tmp/nospam-200532 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-200532 --log_dir /tmp/nospam-200532 unpause
--- PASS: TestErrorSpam/unpause (1.30s)

                                                
                                    
TestErrorSpam/stop (1.92s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-200532 --log_dir /tmp/nospam-200532 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-200532 --log_dir /tmp/nospam-200532 stop: (1.736722785s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-200532 --log_dir /tmp/nospam-200532 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-200532 --log_dir /tmp/nospam-200532 stop
--- PASS: TestErrorSpam/stop (1.92s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1872: local sync path: /home/jenkins/minikube-integration/20451-729451/.minikube/files/etc/test/nested/copy/736216/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (35.8s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2251: (dbg) Run:  out/minikube-linux-amd64 start -p functional-706877 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2251: (dbg) Done: out/minikube-linux-amd64 start -p functional-706877 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (35.798613161s)
--- PASS: TestFunctional/serial/StartWithProxy (35.80s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (38.29s)

=== RUN   TestFunctional/serial/SoftStart
I0224 11:59:50.577332  736216 config.go:182] Loaded profile config "functional-706877": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
functional_test.go:676: (dbg) Run:  out/minikube-linux-amd64 start -p functional-706877 --alsologtostderr -v=8
E0224 12:00:04.363843  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/addons-463362/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:00:04.370329  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/addons-463362/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:00:04.382644  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/addons-463362/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:00:04.404054  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/addons-463362/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:00:04.446157  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/addons-463362/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:00:04.528141  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/addons-463362/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:00:04.689689  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/addons-463362/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:00:05.011621  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/addons-463362/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:00:05.653840  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/addons-463362/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:00:06.935641  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/addons-463362/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:00:09.497949  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/addons-463362/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:00:14.619662  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/addons-463362/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:00:24.861968  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/addons-463362/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:676: (dbg) Done: out/minikube-linux-amd64 start -p functional-706877 --alsologtostderr -v=8: (38.289877521s)
functional_test.go:680: soft start took 38.290681616s for "functional-706877" cluster.
I0224 12:00:28.867592  736216 config.go:182] Loaded profile config "functional-706877": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
--- PASS: TestFunctional/serial/SoftStart (38.29s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:698: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:713: (dbg) Run:  kubectl --context functional-706877 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.25s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 cache add registry.k8s.io/pause:3.1
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 cache add registry.k8s.io/pause:3.3
functional_test.go:1066: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.25s)

TestFunctional/serial/CacheCmd/cache/add_local (0.66s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1094: (dbg) Run:  docker build -t minikube-local-cache-test:functional-706877 /tmp/TestFunctionalserialCacheCmdcacheadd_local3337080238/001
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 cache add minikube-local-cache-test:functional-706877
functional_test.go:1111: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 cache delete minikube-local-cache-test:functional-706877
functional_test.go:1100: (dbg) Run:  docker rmi minikube-local-cache-test:functional-706877
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.66s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1119: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1127: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1141: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.22s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1164: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-706877 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (260.522441ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1175: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 cache reload
functional_test.go:1180: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.22s)

TestFunctional/serial/CacheCmd/cache/delete (0.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1189: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:733: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 kubectl -- --context functional-706877 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:758: (dbg) Run:  out/kubectl --context functional-706877 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (40.15s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:774: (dbg) Run:  out/minikube-linux-amd64 start -p functional-706877 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0224 12:00:45.343396  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/addons-463362/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:774: (dbg) Done: out/minikube-linux-amd64 start -p functional-706877 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.151911519s)
functional_test.go:778: restart took 40.152070507s for "functional-706877" cluster.
I0224 12:01:13.961484  736216 config.go:182] Loaded profile config "functional-706877": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
--- PASS: TestFunctional/serial/ExtraConfig (40.15s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:827: (dbg) Run:  kubectl --context functional-706877 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:842: etcd phase: Running
functional_test.go:852: etcd status: Ready
functional_test.go:842: kube-apiserver phase: Running
functional_test.go:852: kube-apiserver status: Ready
functional_test.go:842: kube-controller-manager phase: Running
functional_test.go:852: kube-controller-manager status: Ready
functional_test.go:842: kube-scheduler phase: Running
functional_test.go:852: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (0.93s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1253: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 logs
--- PASS: TestFunctional/serial/LogsCmd (0.93s)

TestFunctional/serial/LogsFileCmd (0.93s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1267: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 logs --file /tmp/TestFunctionalserialLogsFileCmd1177884280/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.93s)

TestFunctional/serial/InvalidService (4.23s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2338: (dbg) Run:  kubectl --context functional-706877 apply -f testdata/invalidsvc.yaml
functional_test.go:2352: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-706877
functional_test.go:2352: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-706877: exit status 115 (318.338293ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32179 |
	|-----------|-------------|-------------|---------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2344: (dbg) Run:  kubectl --context functional-706877 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.23s)

TestFunctional/parallel/ConfigCmd (0.43s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-706877 config get cpus: exit status 14 (74.274086ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 config set cpus 2
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 config get cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-706877 config get cpus: exit status 14 (81.497135ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.43s)

                                                
TestFunctional/parallel/DashboardCmd (9.45s)

=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:922: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-706877 --alsologtostderr -v=1]
functional_test.go:927: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-706877 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 792592: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.45s)

                                                
TestFunctional/parallel/DryRun (0.32s)

=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-706877 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:991: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-706877 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (138.917556ms)
-- stdout --
	* [functional-706877] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20451
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20451-729451/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20451-729451/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0224 12:01:46.108543  792192 out.go:345] Setting OutFile to fd 1 ...
	I0224 12:01:46.108673  792192 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0224 12:01:46.108683  792192 out.go:358] Setting ErrFile to fd 2...
	I0224 12:01:46.108687  792192 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0224 12:01:46.108850  792192 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20451-729451/.minikube/bin
	I0224 12:01:46.109464  792192 out.go:352] Setting JSON to false
	I0224 12:01:46.110630  792192 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":67455,"bootTime":1740331051,"procs":334,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0224 12:01:46.110693  792192 start.go:139] virtualization: kvm guest
	I0224 12:01:46.112721  792192 out.go:177] * [functional-706877] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	I0224 12:01:46.113886  792192 out.go:177]   - MINIKUBE_LOCATION=20451
	I0224 12:01:46.113916  792192 notify.go:220] Checking for updates...
	I0224 12:01:46.115989  792192 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0224 12:01:46.117214  792192 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20451-729451/kubeconfig
	I0224 12:01:46.118356  792192 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20451-729451/.minikube
	I0224 12:01:46.119635  792192 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0224 12:01:46.120863  792192 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0224 12:01:46.122325  792192 config.go:182] Loaded profile config "functional-706877": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0224 12:01:46.122775  792192 driver.go:394] Setting default libvirt URI to qemu:///system
	I0224 12:01:46.144370  792192 docker.go:123] docker version: linux-28.0.0:Docker Engine - Community
	I0224 12:01:46.144461  792192 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0224 12:01:46.191904  792192 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2025-02-24 12:01:46.183568187 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:28.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0224 12:01:46.192008  792192 docker.go:318] overlay module found
	I0224 12:01:46.193668  792192 out.go:177] * Using the docker driver based on existing profile
	I0224 12:01:46.194709  792192 start.go:297] selected driver: docker
	I0224 12:01:46.194722  792192 start.go:901] validating driver "docker" against &{Name:functional-706877 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-706877 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0224 12:01:46.194824  792192 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0224 12:01:46.196520  792192 out.go:201] 
	W0224 12:01:46.197591  792192 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0224 12:01:46.198706  792192 out.go:201] 
** /stderr **
functional_test.go:1008: (dbg) Run:  out/minikube-linux-amd64 start -p functional-706877 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.32s)

TestFunctional/parallel/InternationalLanguage (0.15s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 start -p functional-706877 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-706877 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (149.007152ms)
-- stdout --
	* [functional-706877] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20451
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20451-729451/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20451-729451/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0224 12:01:45.098927  791639 out.go:345] Setting OutFile to fd 1 ...
	I0224 12:01:45.099060  791639 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0224 12:01:45.099070  791639 out.go:358] Setting ErrFile to fd 2...
	I0224 12:01:45.099075  791639 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0224 12:01:45.099359  791639 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20451-729451/.minikube/bin
	I0224 12:01:45.099922  791639 out.go:352] Setting JSON to false
	I0224 12:01:45.101055  791639 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":67454,"bootTime":1740331051,"procs":322,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0224 12:01:45.101126  791639 start.go:139] virtualization: kvm guest
	I0224 12:01:45.102961  791639 out.go:177] * [functional-706877] minikube v1.35.0 sur Ubuntu 20.04 (kvm/amd64)
	I0224 12:01:45.104438  791639 out.go:177]   - MINIKUBE_LOCATION=20451
	I0224 12:01:45.104469  791639 notify.go:220] Checking for updates...
	I0224 12:01:45.106539  791639 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0224 12:01:45.107650  791639 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20451-729451/kubeconfig
	I0224 12:01:45.108708  791639 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20451-729451/.minikube
	I0224 12:01:45.109717  791639 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0224 12:01:45.110693  791639 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0224 12:01:45.111973  791639 config.go:182] Loaded profile config "functional-706877": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0224 12:01:45.112432  791639 driver.go:394] Setting default libvirt URI to qemu:///system
	I0224 12:01:45.136397  791639 docker.go:123] docker version: linux-28.0.0:Docker Engine - Community
	I0224 12:01:45.136540  791639 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0224 12:01:45.184668  791639 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2025-02-24 12:01:45.175708322 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:28.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0224 12:01:45.184774  791639 docker.go:318] overlay module found
	I0224 12:01:45.186233  791639 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0224 12:01:45.187185  791639 start.go:297] selected driver: docker
	I0224 12:01:45.187195  791639 start.go:901] validating driver "docker" against &{Name:functional-706877 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1740046583-20436@sha256:b4324ef42fab4e243d39dd69028f243606b2fa4379f6ec916c89e512f68338f4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:functional-706877 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0224 12:01:45.187286  791639 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0224 12:01:45.189017  791639 out.go:201] 
	W0224 12:01:45.189999  791639 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0224 12:01:45.191011  791639 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)
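The French output above is the point of this test: minikube selects its message catalogue from the standard locale environment variables (LC_ALL / LANG), so running with a French locale yields the translated RSRC_INSUFFICIENT_REQ_MEMORY error. A minimal sketch of that selection logic, using a hypothetical `msg` helper as a stand-in for minikube's internal i18n lookup so it runs without the binary:

```shell
#!/bin/sh
# Stand-in for minikube's locale-based message lookup (hypothetical helper;
# the real binary resolves the catalogue internally from LC_ALL / LANG).
msg() {
  case "${LC_ALL:-${LANG:-en_US}}" in
    fr*) echo "X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY" ;;
    *)   echo "X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY" ;;
  esac
}

LC_ALL=fr_FR.UTF-8 msg   # French catalogue selected
LC_ALL=en_US.UTF-8 msg   # English default
```

The test only has to set the locale before invoking `out/minikube-linux-amd64`; no dedicated flag is involved.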

                                                
                                    
TestFunctional/parallel/StatusCmd (0.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:871: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 status
functional_test.go:877: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:889: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.87s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (8.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1646: (dbg) Run:  kubectl --context functional-706877 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1652: (dbg) Run:  kubectl --context functional-706877 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58f9cf68d8-gh285" [ca7dd209-9e70-4b36-bd93-21a1814853b7] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-58f9cf68d8-gh285" [ca7dd209-9e70-4b36-bd93-21a1814853b7] Running
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.003485738s
functional_test.go:1666: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 service hello-node-connect --url
functional_test.go:1672: found endpoint for hello-node-connect: http://192.168.49.2:32717
functional_test.go:1692: http://192.168.49.2:32717: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-58f9cf68d8-gh285

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32717
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.87s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 addons list
functional_test.go:1719: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.19s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (30.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [231b34d8-f471-46de-8200-2293d010ed28] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003464147s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-706877 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-706877 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-706877 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-706877 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [201e762f-3f9d-495c-bb47-5d9c4e361bea] Pending
helpers_test.go:344: "sp-pod" [201e762f-3f9d-495c-bb47-5d9c4e361bea] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [201e762f-3f9d-495c-bb47-5d9c4e361bea] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 16.00377829s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-706877 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-706877 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-706877 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [cd211d21-452c-4f60-8136-eb5de12c0240] Pending
helpers_test.go:344: "sp-pod" [cd211d21-452c-4f60-8136-eb5de12c0240] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [cd211d21-452c-4f60-8136-eb5de12c0240] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.004505393s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-706877 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (30.80s)
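The sequence above is the standard PVC persistence check: write a file through the first pod, delete the pod, re-create it on the same claim, and confirm the file survived. A self-contained sketch of that flow, with the provisioned volume modelled as a host directory and a hypothetical `pod` helper standing in for `kubectl --context functional-706877 exec sp-pod --`, so it runs without a cluster:

```shell
#!/bin/sh
# Model the provisioned volume as a directory that outlives the pod.
pv=$(mktemp -d)

# Stand-in for: kubectl exec sp-pod -- <cmd>, run inside /tmp/mount
pod() { ( cd "$pv" && "$@" ); }

pod touch foo       # first sp-pod: touch /tmp/mount/foo
# ...pod deleted and re-created from pod.yaml; the claim is reattached...
if pod ls | grep -qx foo; then result="persisted"; else result="lost"; fi
echo "file $result across pod restart"

rm -r "$pv"
```

The key property the test exercises is exactly the one the sketch models: the volume's lifetime is bound to the claim, not to any single pod.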

                                                
                                    
TestFunctional/parallel/SSHCmd (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 ssh "echo hello"
functional_test.go:1759: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.60s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 ssh -n functional-706877 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 cp functional-706877:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3517686692/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 ssh -n functional-706877 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 ssh -n functional-706877 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.06s)

                                                
                                    
TestFunctional/parallel/MySQL (22.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1810: (dbg) Run:  kubectl --context functional-706877 replace --force -f testdata/mysql.yaml
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-58ccfd96bb-94pcj" [40f0ad12-0c1d-4920-9ef6-d5fce1a2b106] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-58ccfd96bb-94pcj" [40f0ad12-0c1d-4920-9ef6-d5fce1a2b106] Running
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 17.011421231s
functional_test.go:1824: (dbg) Run:  kubectl --context functional-706877 exec mysql-58ccfd96bb-94pcj -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-706877 exec mysql-58ccfd96bb-94pcj -- mysql -ppassword -e "show databases;": exit status 1 (115.756774ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0224 12:01:37.884910  736216 retry.go:31] will retry after 855.835013ms: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-706877 exec mysql-58ccfd96bb-94pcj -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-706877 exec mysql-58ccfd96bb-94pcj -- mysql -ppassword -e "show databases;": exit status 1 (130.018351ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0224 12:01:38.871906  736216 retry.go:31] will retry after 2.230490749s: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-706877 exec mysql-58ccfd96bb-94pcj -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-706877 exec mysql-58ccfd96bb-94pcj -- mysql -ppassword -e "show databases;": exit status 1 (115.870761ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0224 12:01:41.219216  736216 retry.go:31] will retry after 1.906886036s: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-706877 exec mysql-58ccfd96bb-94pcj -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (22.76s)
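The `will retry after …` lines above come from a retry-with-backoff loop (retry.go) around the `kubectl exec … mysql` probe, needed because mysqld briefly rejects logins and then drops connections while it finishes initializing. A self-contained sketch of that loop; `try_cmd` is a stand-in that fails twice before succeeding, where the real test would run the `kubectl exec` command:

```shell
#!/bin/sh
attempts=0
# Stand-in for: kubectl exec ... -- mysql -ppassword -e "show databases;"
try_cmd() {
  attempts=$((attempts + 1))
  [ "$attempts" -ge 3 ]   # fails on the first two calls, succeeds on the third
}

# retry MAX DELAY CMD...: run CMD until it succeeds, up to MAX attempts.
retry() {
  max=$1 delay=$2; shift 2
  i=0
  until "$@"; do
    rc=$?
    i=$((i + 1))
    if [ "$i" -ge "$max" ]; then
      echo "giving up after $max attempts"
      return "$rc"
    fi
    echo "will retry after ${delay}s: exit status $rc"
    sleep "$delay"
  done
  echo "succeeded after $((i + 1)) attempts"
}

retry 5 0 try_cmd
```

Note that both failure modes in the log (ERROR 1045 access denied, then ERROR 2002 socket not ready) are treated the same way: any non-zero exit simply schedules another attempt.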

                                                
                                    
TestFunctional/parallel/FileSync (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1946: Checking for existence of /etc/test/nested/copy/736216/hosts within VM
functional_test.go:1948: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 ssh "sudo cat /etc/test/nested/copy/736216/hosts"
functional_test.go:1953: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.31s)

                                                
                                    
TestFunctional/parallel/CertSync (1.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1989: Checking for existence of /etc/ssl/certs/736216.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 ssh "sudo cat /etc/ssl/certs/736216.pem"
functional_test.go:1989: Checking for existence of /usr/share/ca-certificates/736216.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 ssh "sudo cat /usr/share/ca-certificates/736216.pem"
functional_test.go:1989: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1990: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/7362162.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 ssh "sudo cat /etc/ssl/certs/7362162.pem"
functional_test.go:2016: Checking for existence of /usr/share/ca-certificates/7362162.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 ssh "sudo cat /usr/share/ca-certificates/7362162.pem"
functional_test.go:2016: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2017: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.77s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:236: (dbg) Run:  kubectl --context functional-706877 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2044: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 ssh "sudo systemctl is-active crio"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-706877 ssh "sudo systemctl is-active crio": exit status 1 (396.759367ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.40s)
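The PASS despite `exit status 1`/`status 3` above is expected: `systemctl is-active` prints the unit state to stdout and encodes it in the exit code (0 when active, a non-zero code such as 3 when inactive), and the test asserts on the `inactive` output, not the status. A sketch of that convention, with a stand-in for `systemctl is-active` so it runs without systemd:

```shell
#!/bin/sh
# Stand-in for: systemctl is-active "$1" (prints state; exit 0 only if active).
is_active() {
  case "$1" in
    docker) echo "active";   return 0 ;;
    *)      echo "inactive"; return 3 ;;
  esac
}

for unit in docker crio; do
  if state=$(is_active "$unit"); then
    echo "$unit: $state"
  else
    echo "$unit: $state (non-zero exit, as expected for a disabled runtime)"
  fi
done
```

The `ssh: Process exited with status 3` line in the stderr block is that same code propagated back through the `minikube ssh` session.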

                                                
                                    
TestFunctional/parallel/License (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2305: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.18s)

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (1.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:516: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-706877 docker-env) && out/minikube-linux-amd64 status -p functional-706877"
functional_test.go:539: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-706877 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.05s)
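`docker-env` prints shell `export` lines, and the bash test works by eval-ing them so that the subsequent `docker images` talks to the daemon inside the minikube node rather than the host's. A sketch with a mock emitter (the variable values here are illustrative, not taken from this run) so it runs without minikube:

```shell
#!/bin/sh
# Stand-in for: out/minikube-linux-amd64 -p functional-706877 docker-env
mock_docker_env() {
  cat <<'EOF'
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.49.2:2376"
EOF
}

# As in the test's: bash -c "eval $(... docker-env) && docker images"
eval "$(mock_docker_env)"
echo "docker CLI now targets: $DOCKER_HOST"
```

Because the output is plain shell, the same pattern works in any POSIX shell; only the emitter command changes per shell dialect.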

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2136: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.63s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-706877 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-706877 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-706877 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-706877 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 786675: os: process already finished
helpers_test.go:502: unable to terminate pid 786337: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.63s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-706877 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (17.27s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-706877 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [61fafb2f-23d3-4b33-ae25-e5eb5527c2e1] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [61fafb2f-23d3-4b33-ae25-e5eb5527c2e1] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 17.003121478s
I0224 12:01:39.678208  736216 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (17.27s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2273: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.69s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2287: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.69s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 image ls --format short --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-706877 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.2
registry.k8s.io/kube-proxy:v1.32.2
registry.k8s.io/kube-controller-manager:v1.32.2
registry.k8s.io/kube-apiserver:v1.32.2
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-706877
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-706877
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-706877 image ls --format short --alsologtostderr:
I0224 12:01:53.310678  793597 out.go:345] Setting OutFile to fd 1 ...
I0224 12:01:53.310938  793597 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0224 12:01:53.310949  793597 out.go:358] Setting ErrFile to fd 2...
I0224 12:01:53.310954  793597 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0224 12:01:53.311178  793597 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20451-729451/.minikube/bin
I0224 12:01:53.311765  793597 config.go:182] Loaded profile config "functional-706877": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0224 12:01:53.311898  793597 config.go:182] Loaded profile config "functional-706877": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0224 12:01:53.312439  793597 cli_runner.go:164] Run: docker container inspect functional-706877 --format={{.State.Status}}
I0224 12:01:53.334313  793597 ssh_runner.go:195] Run: systemctl --version
I0224 12:01:53.334397  793597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-706877
I0224 12:01:53.350120  793597 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/20451-729451/.minikube/machines/functional-706877/id_rsa Username:docker}
I0224 12:01:53.433763  793597 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 image ls --format table --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-706877 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-apiserver              | v1.32.2           | 85b7a174738ba | 97MB   |
| registry.k8s.io/kube-proxy                  | v1.32.2           | f1332858868e1 | 94MB   |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/kube-scheduler              | v1.32.2           | d8e673e7c9983 | 69.6MB |
| docker.io/library/nginx                     | alpine            | 1ff4bb4faebcf | 47.9MB |
| registry.k8s.io/etcd                        | 3.5.16-0          | a9e7e6b294baf | 150MB  |
| registry.k8s.io/coredns/coredns             | v1.11.3           | c69fa2e9cbf5f | 61.8MB |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| docker.io/library/minikube-local-cache-test | functional-706877 | bb06ba1ac2b12 | 30B    |
| docker.io/library/nginx                     | latest            | 97662d24417b3 | 192MB  |
| registry.k8s.io/pause                       | 3.10              | 873ed75102791 | 736kB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/kube-controller-manager     | v1.32.2           | b6a454c5a800d | 89.7MB |
| docker.io/kicbase/echo-server               | functional-706877 | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-706877 image ls --format table --alsologtostderr:
I0224 12:01:53.887443  793953 out.go:345] Setting OutFile to fd 1 ...
I0224 12:01:53.887652  793953 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0224 12:01:53.887677  793953 out.go:358] Setting ErrFile to fd 2...
I0224 12:01:53.887688  793953 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0224 12:01:53.887904  793953 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20451-729451/.minikube/bin
I0224 12:01:53.888574  793953 config.go:182] Loaded profile config "functional-706877": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0224 12:01:53.888745  793953 config.go:182] Loaded profile config "functional-706877": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0224 12:01:53.889138  793953 cli_runner.go:164] Run: docker container inspect functional-706877 --format={{.State.Status}}
I0224 12:01:53.906669  793953 ssh_runner.go:195] Run: systemctl --version
I0224 12:01:53.906729  793953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-706877
I0224 12:01:53.925585  793953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/20451-729451/.minikube/machines/functional-706877/id_rsa Username:docker}
I0224 12:01:54.017805  793953 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 image ls --format json --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-706877 image ls --format json --alsologtostderr:
[{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-706877"],"size":"4940000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"1ff4bb4faebcfb1f7e01144fa9904a570ab9bab88694457855feb6c6bba3fa07","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"47900000"},{"id":"a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"150000000"},{"id":"d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.32.2"],"size":"69600000"},{"id":"b6a454c5a800d201daacea
d6ff195ec6049fe6dc086621b0670bca912efaf389","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.2"],"size":"89700000"},{"id":"97662d24417b316f60607afbca9f226a2ba58f09d642f27b8e197a89859ddc8e","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"192000000"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"736000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry
.k8s.io/pause:latest"],"size":"240000"},{"id":"85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.2"],"size":"97000000"},{"id":"f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.32.2"],"size":"94000000"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61800000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"bb06ba1a
c2b1224c1434d6d6d5f14548c8c9e519fa50e1436a9adf3ca2bcadeb","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-706877"],"size":"30"}]
functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-706877 image ls --format json --alsologtostderr:
I0224 12:01:53.640066  793777 out.go:345] Setting OutFile to fd 1 ...
I0224 12:01:53.640671  793777 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0224 12:01:53.641532  793777 out.go:358] Setting ErrFile to fd 2...
I0224 12:01:53.641544  793777 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0224 12:01:53.641953  793777 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20451-729451/.minikube/bin
I0224 12:01:53.643365  793777 config.go:182] Loaded profile config "functional-706877": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0224 12:01:53.643604  793777 config.go:182] Loaded profile config "functional-706877": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0224 12:01:53.643991  793777 cli_runner.go:164] Run: docker container inspect functional-706877 --format={{.State.Status}}
I0224 12:01:53.672208  793777 ssh_runner.go:195] Run: systemctl --version
I0224 12:01:53.672279  793777 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-706877
I0224 12:01:53.694638  793777 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/20451-729451/.minikube/machines/functional-706877/id_rsa Username:docker}
I0224 12:01:53.778579  793777 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 image ls --format yaml --alsologtostderr
functional_test.go:283: (dbg) Stdout: out/minikube-linux-amd64 -p functional-706877 image ls --format yaml --alsologtostderr:
- id: d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.2
size: "69600000"
- id: 97662d24417b316f60607afbca9f226a2ba58f09d642f27b8e197a89859ddc8e
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "192000000"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "736000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: bb06ba1ac2b1224c1434d6d6d5f14548c8c9e519fa50e1436a9adf3ca2bcadeb
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-706877
size: "30"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61800000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.2
size: "89700000"
- id: f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.32.2
size: "94000000"
- id: a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "150000000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-706877
size: "4940000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.2
size: "97000000"
- id: 1ff4bb4faebcfb1f7e01144fa9904a570ab9bab88694457855feb6c6bba3fa07
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "47900000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"

functional_test.go:286: (dbg) Stderr: out/minikube-linux-amd64 -p functional-706877 image ls --format yaml --alsologtostderr:
I0224 12:01:53.407883  793651 out.go:345] Setting OutFile to fd 1 ...
I0224 12:01:53.408106  793651 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0224 12:01:53.408115  793651 out.go:358] Setting ErrFile to fd 2...
I0224 12:01:53.408119  793651 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0224 12:01:53.408291  793651 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20451-729451/.minikube/bin
I0224 12:01:53.408934  793651 config.go:182] Loaded profile config "functional-706877": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0224 12:01:53.409031  793651 config.go:182] Loaded profile config "functional-706877": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0224 12:01:53.409476  793651 cli_runner.go:164] Run: docker container inspect functional-706877 --format={{.State.Status}}
I0224 12:01:53.426691  793651 ssh_runner.go:195] Run: systemctl --version
I0224 12:01:53.426740  793651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-706877
I0224 12:01:53.444450  793651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/20451-729451/.minikube/machines/functional-706877/id_rsa Username:docker}
I0224 12:01:53.530048  793651 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 ssh pgrep buildkitd
functional_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-706877 ssh pgrep buildkitd: exit status 1 (265.767116ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:332: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 image build -t localhost/my-image:functional-706877 testdata/build --alsologtostderr
functional_test.go:332: (dbg) Done: out/minikube-linux-amd64 -p functional-706877 image build -t localhost/my-image:functional-706877 testdata/build --alsologtostderr: (2.051985864s)
functional_test.go:340: (dbg) Stderr: out/minikube-linux-amd64 -p functional-706877 image build -t localhost/my-image:functional-706877 testdata/build --alsologtostderr:
I0224 12:01:53.784489  793916 out.go:345] Setting OutFile to fd 1 ...
I0224 12:01:53.784648  793916 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0224 12:01:53.784679  793916 out.go:358] Setting ErrFile to fd 2...
I0224 12:01:53.784694  793916 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0224 12:01:53.785027  793916 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20451-729451/.minikube/bin
I0224 12:01:53.796721  793916 config.go:182] Loaded profile config "functional-706877": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0224 12:01:53.797521  793916 config.go:182] Loaded profile config "functional-706877": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0224 12:01:53.798113  793916 cli_runner.go:164] Run: docker container inspect functional-706877 --format={{.State.Status}}
I0224 12:01:53.820196  793916 ssh_runner.go:195] Run: systemctl --version
I0224 12:01:53.820257  793916 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-706877
I0224 12:01:53.843274  793916 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/20451-729451/.minikube/machines/functional-706877/id_rsa Username:docker}
I0224 12:01:53.937620  793916 build_images.go:161] Building image from path: /tmp/build.1730659403.tar
I0224 12:01:53.937686  793916 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0224 12:01:53.959313  793916 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1730659403.tar
I0224 12:01:53.962524  793916 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1730659403.tar: stat -c "%s %y" /var/lib/minikube/build/build.1730659403.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1730659403.tar': No such file or directory
I0224 12:01:53.962547  793916 ssh_runner.go:362] scp /tmp/build.1730659403.tar --> /var/lib/minikube/build/build.1730659403.tar (3072 bytes)
I0224 12:01:53.986036  793916 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1730659403
I0224 12:01:53.993810  793916 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1730659403 -xf /var/lib/minikube/build/build.1730659403.tar
I0224 12:01:54.002019  793916 docker.go:360] Building image: /var/lib/minikube/build/build.1730659403
I0224 12:01:54.002064  793916 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-706877 /var/lib/minikube/build/build.1730659403
2025/02/24 12:01:54 in: []string{}
2025/02/24 12:01:54 Parsed entitlements: []
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.0s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa done
#5 DONE 0.3s

#6 [2/3] RUN true
#6 DONE 0.2s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:b02be8479f91d21bab9890bd10a3997c79ff92edb0f645b3b020b3ec4ff77700 done
#8 naming to localhost/my-image:functional-706877 done
#8 DONE 0.0s
I0224 12:01:55.763501  793916 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-706877 /var/lib/minikube/build/build.1730659403: (1.761415289s)
I0224 12:01:55.763578  793916 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1730659403
I0224 12:01:55.772043  793916 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1730659403.tar
I0224 12:01:55.780667  793916 build_images.go:217] Built localhost/my-image:functional-706877 from /tmp/build.1730659403.tar
I0224 12:01:55.780717  793916 build_images.go:133] succeeded building to: functional-706877
I0224 12:01:55.780722  793916 build_images.go:134] failed building to: 
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.51s)

TestFunctional/parallel/ImageCommands/Setup (0.72s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:359: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:364: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-706877
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.72s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.11s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:372: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 image load --daemon kicbase/echo-server:functional-706877 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.11s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.82s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 image load --daemon kicbase/echo-server:functional-706877 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.82s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (0.99s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:252: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:257: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-706877
functional_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 image load --daemon kicbase/echo-server:functional-706877 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 image ls
E0224 12:01:26.305109  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/addons-463362/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (0.99s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:397: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 image save kicbase/echo-server:functional-706877 /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.49s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.42s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 image rm kicbase/echo-server:functional-706877 --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.42s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.8s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:426: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 image load /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
functional_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.80s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.55s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:436: (dbg) Run:  docker rmi kicbase/echo-server:functional-706877
functional_test.go:441: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 image save --daemon kicbase/echo-server:functional-706877 --alsologtostderr
functional_test.go:449: (dbg) Run:  docker image inspect kicbase/echo-server:functional-706877
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.55s)

TestFunctional/parallel/MountCmd/any-port (12.95s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-706877 /tmp/TestFunctionalparallelMountCmdany-port1054653847/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1740398488660096626" to /tmp/TestFunctionalparallelMountCmdany-port1054653847/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1740398488660096626" to /tmp/TestFunctionalparallelMountCmdany-port1054653847/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1740398488660096626" to /tmp/TestFunctionalparallelMountCmdany-port1054653847/001/test-1740398488660096626
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-706877 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (306.892657ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I0224 12:01:28.967375  736216 retry.go:31] will retry after 722.517551ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Feb 24 12:01 created-by-test
-rw-r--r-- 1 docker docker 24 Feb 24 12:01 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Feb 24 12:01 test-1740398488660096626
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 ssh cat /mount-9p/test-1740398488660096626
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-706877 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [1c085a70-2229-4263-b7a2-40536035b723] Pending
helpers_test.go:344: "busybox-mount" [1c085a70-2229-4263-b7a2-40536035b723] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [1c085a70-2229-4263-b7a2-40536035b723] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [1c085a70-2229-4263-b7a2-40536035b723] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 10.004256904s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-706877 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-706877 /tmp/TestFunctionalparallelMountCmdany-port1054653847/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (12.95s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-706877 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.105.153.107 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-706877 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.36s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1287: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1292: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.36s)

TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1327: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1332: Took "302.732989ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1341: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1346: Took "53.570279ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1378: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1383: Took "337.267248ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1391: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1396: Took "50.329876ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

TestFunctional/parallel/ServiceCmd/DeployApp (9.16s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-706877 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1462: (dbg) Run:  kubectl --context functional-706877 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-fcfd88b6f-b5jfb" [3cb3bf22-36ec-44e4-9684-fff4b204d0fe] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-fcfd88b6f-b5jfb" [3cb3bf22-36ec-44e4-9684-fff4b204d0fe] Running
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.003629267s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.16s)

TestFunctional/parallel/MountCmd/specific-port (1.68s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-706877 /tmp/TestFunctionalparallelMountCmdspecific-port2091904313/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-706877 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (362.31267ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I0224 12:01:41.974081  736216 retry.go:31] will retry after 292.23979ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-706877 /tmp/TestFunctionalparallelMountCmdspecific-port2091904313/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-706877 ssh "sudo umount -f /mount-9p": exit status 1 (275.029669ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-706877 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-706877 /tmp/TestFunctionalparallelMountCmdspecific-port2091904313/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.68s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.76s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-706877 /tmp/TestFunctionalparallelMountCmdVerifyCleanup183277272/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-706877 /tmp/TestFunctionalparallelMountCmdVerifyCleanup183277272/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-706877 /tmp/TestFunctionalparallelMountCmdVerifyCleanup183277272/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-706877 ssh "findmnt -T" /mount1: exit status 1 (376.953051ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I0224 12:01:43.668254  736216 retry.go:31] will retry after 563.513372ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-706877 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-706877 /tmp/TestFunctionalparallelMountCmdVerifyCleanup183277272/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-706877 /tmp/TestFunctionalparallelMountCmdVerifyCleanup183277272/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-706877 /tmp/TestFunctionalparallelMountCmdVerifyCleanup183277272/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.76s)

TestFunctional/parallel/ServiceCmd/List (1.72s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1476: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 service list
functional_test.go:1476: (dbg) Done: out/minikube-linux-amd64 -p functional-706877 service list: (1.722619266s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.72s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.73s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1506: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 service list -o json
functional_test.go:1506: (dbg) Done: out/minikube-linux-amd64 -p functional-706877 service list -o json: (1.730895956s)
functional_test.go:1511: Took "1.730981671s" to run "out/minikube-linux-amd64 -p functional-706877 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.73s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.56s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1526: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 service --namespace=default --https --url hello-node
functional_test.go:1539: found endpoint: https://192.168.49.2:31877
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.56s)

TestFunctional/parallel/ServiceCmd/Format (0.5s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1557: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.50s)

TestFunctional/parallel/ServiceCmd/URL (0.5s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1576: (dbg) Run:  out/minikube-linux-amd64 -p functional-706877 service hello-node --url
functional_test.go:1582: found endpoint for hello-node: http://192.168.49.2:31877
2025/02/24 12:01:55 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.50s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-706877
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:215: (dbg) Run:  docker rmi -f localhost/my-image:functional-706877
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:223: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-706877
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (99.06s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-468395 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0224 12:02:48.226849  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/addons-463362/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-468395 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (1m38.420008267s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-468395 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (99.06s)

TestMultiControlPlane/serial/DeployApp (5.79s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-468395 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-468395 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-468395 -- rollout status deployment/busybox: (3.805465912s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-468395 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-468395 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-468395 -- exec busybox-58667487b6-4cccl -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-468395 -- exec busybox-58667487b6-9pkj4 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-468395 -- exec busybox-58667487b6-tpxzl -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-468395 -- exec busybox-58667487b6-4cccl -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-468395 -- exec busybox-58667487b6-9pkj4 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-468395 -- exec busybox-58667487b6-tpxzl -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-468395 -- exec busybox-58667487b6-4cccl -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-468395 -- exec busybox-58667487b6-9pkj4 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-468395 -- exec busybox-58667487b6-tpxzl -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.79s)

TestMultiControlPlane/serial/PingHostFromPods (1.07s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-468395 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-468395 -- exec busybox-58667487b6-4cccl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-468395 -- exec busybox-58667487b6-4cccl -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-468395 -- exec busybox-58667487b6-9pkj4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-468395 -- exec busybox-58667487b6-9pkj4 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-468395 -- exec busybox-58667487b6-tpxzl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-468395 -- exec busybox-58667487b6-tpxzl -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.07s)

TestMultiControlPlane/serial/AddWorkerNode (20.4s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-468395 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-468395 -v=7 --alsologtostderr: (19.597714506s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-468395 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (20.40s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-468395 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.8s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.80s)

TestMultiControlPlane/serial/CopyFile (15.36s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-468395 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-468395 cp testdata/cp-test.txt ha-468395:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-468395 ssh -n ha-468395 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-468395 cp ha-468395:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2296360165/001/cp-test_ha-468395.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-468395 ssh -n ha-468395 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-468395 cp ha-468395:/home/docker/cp-test.txt ha-468395-m02:/home/docker/cp-test_ha-468395_ha-468395-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-468395 ssh -n ha-468395 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-468395 ssh -n ha-468395-m02 "sudo cat /home/docker/cp-test_ha-468395_ha-468395-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-468395 cp ha-468395:/home/docker/cp-test.txt ha-468395-m03:/home/docker/cp-test_ha-468395_ha-468395-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-468395 ssh -n ha-468395 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-468395 ssh -n ha-468395-m03 "sudo cat /home/docker/cp-test_ha-468395_ha-468395-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-468395 cp ha-468395:/home/docker/cp-test.txt ha-468395-m04:/home/docker/cp-test_ha-468395_ha-468395-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-468395 ssh -n ha-468395 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-468395 ssh -n ha-468395-m04 "sudo cat /home/docker/cp-test_ha-468395_ha-468395-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-468395 cp testdata/cp-test.txt ha-468395-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-468395 ssh -n ha-468395-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-468395 cp ha-468395-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2296360165/001/cp-test_ha-468395-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-468395 ssh -n ha-468395-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-468395 cp ha-468395-m02:/home/docker/cp-test.txt ha-468395:/home/docker/cp-test_ha-468395-m02_ha-468395.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-468395 ssh -n ha-468395-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-468395 ssh -n ha-468395 "sudo cat /home/docker/cp-test_ha-468395-m02_ha-468395.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-468395 cp ha-468395-m02:/home/docker/cp-test.txt ha-468395-m03:/home/docker/cp-test_ha-468395-m02_ha-468395-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-468395 ssh -n ha-468395-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-468395 ssh -n ha-468395-m03 "sudo cat /home/docker/cp-test_ha-468395-m02_ha-468395-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-468395 cp ha-468395-m02:/home/docker/cp-test.txt ha-468395-m04:/home/docker/cp-test_ha-468395-m02_ha-468395-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-468395 ssh -n ha-468395-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-468395 ssh -n ha-468395-m04 "sudo cat /home/docker/cp-test_ha-468395-m02_ha-468395-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-468395 cp testdata/cp-test.txt ha-468395-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-468395 ssh -n ha-468395-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-468395 cp ha-468395-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2296360165/001/cp-test_ha-468395-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-468395 ssh -n ha-468395-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-468395 cp ha-468395-m03:/home/docker/cp-test.txt ha-468395:/home/docker/cp-test_ha-468395-m03_ha-468395.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-468395 ssh -n ha-468395-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-468395 ssh -n ha-468395 "sudo cat /home/docker/cp-test_ha-468395-m03_ha-468395.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-468395 cp ha-468395-m03:/home/docker/cp-test.txt ha-468395-m02:/home/docker/cp-test_ha-468395-m03_ha-468395-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-468395 ssh -n ha-468395-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-468395 ssh -n ha-468395-m02 "sudo cat /home/docker/cp-test_ha-468395-m03_ha-468395-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-468395 cp ha-468395-m03:/home/docker/cp-test.txt ha-468395-m04:/home/docker/cp-test_ha-468395-m03_ha-468395-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-468395 ssh -n ha-468395-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-468395 ssh -n ha-468395-m04 "sudo cat /home/docker/cp-test_ha-468395-m03_ha-468395-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-468395 cp testdata/cp-test.txt ha-468395-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-468395 ssh -n ha-468395-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-468395 cp ha-468395-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2296360165/001/cp-test_ha-468395-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-468395 ssh -n ha-468395-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-468395 cp ha-468395-m04:/home/docker/cp-test.txt ha-468395:/home/docker/cp-test_ha-468395-m04_ha-468395.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-468395 ssh -n ha-468395-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-468395 ssh -n ha-468395 "sudo cat /home/docker/cp-test_ha-468395-m04_ha-468395.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-468395 cp ha-468395-m04:/home/docker/cp-test.txt ha-468395-m02:/home/docker/cp-test_ha-468395-m04_ha-468395-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-468395 ssh -n ha-468395-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-468395 ssh -n ha-468395-m02 "sudo cat /home/docker/cp-test_ha-468395-m04_ha-468395-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-468395 cp ha-468395-m04:/home/docker/cp-test.txt ha-468395-m03:/home/docker/cp-test_ha-468395-m04_ha-468395-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-468395 ssh -n ha-468395-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-468395 ssh -n ha-468395-m03 "sudo cat /home/docker/cp-test_ha-468395-m04_ha-468395-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (15.36s)

TestMultiControlPlane/serial/StopSecondaryNode (11.43s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-468395 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-468395 node stop m02 -v=7 --alsologtostderr: (10.796651085s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-468395 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-468395 status -v=7 --alsologtostderr: exit status 7 (636.738518ms)

-- stdout --
	ha-468395
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-468395-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-468395-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-468395-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0224 12:04:32.361076  820992 out.go:345] Setting OutFile to fd 1 ...
	I0224 12:04:32.361378  820992 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0224 12:04:32.361390  820992 out.go:358] Setting ErrFile to fd 2...
	I0224 12:04:32.361395  820992 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0224 12:04:32.361612  820992 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20451-729451/.minikube/bin
	I0224 12:04:32.361817  820992 out.go:352] Setting JSON to false
	I0224 12:04:32.361848  820992 mustload.go:65] Loading cluster: ha-468395
	I0224 12:04:32.361970  820992 notify.go:220] Checking for updates...
	I0224 12:04:32.362271  820992 config.go:182] Loaded profile config "ha-468395": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0224 12:04:32.362293  820992 status.go:174] checking status of ha-468395 ...
	I0224 12:04:32.362711  820992 cli_runner.go:164] Run: docker container inspect ha-468395 --format={{.State.Status}}
	I0224 12:04:32.382115  820992 status.go:371] ha-468395 host status = "Running" (err=<nil>)
	I0224 12:04:32.382211  820992 host.go:66] Checking if "ha-468395" exists ...
	I0224 12:04:32.382578  820992 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-468395
	I0224 12:04:32.401775  820992 host.go:66] Checking if "ha-468395" exists ...
	I0224 12:04:32.402102  820992 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0224 12:04:32.402138  820992 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-468395
	I0224 12:04:32.419477  820992 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/20451-729451/.minikube/machines/ha-468395/id_rsa Username:docker}
	I0224 12:04:32.502203  820992 ssh_runner.go:195] Run: systemctl --version
	I0224 12:04:32.506648  820992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0224 12:04:32.516823  820992 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0224 12:04:32.569604  820992 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:true NGoroutines:73 SystemTime:2025-02-24 12:04:32.559637012 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:28.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0224 12:04:32.570408  820992 kubeconfig.go:125] found "ha-468395" server: "https://192.168.49.254:8443"
	I0224 12:04:32.570452  820992 api_server.go:166] Checking apiserver status ...
	I0224 12:04:32.570495  820992 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 12:04:32.582104  820992 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2395/cgroup
	I0224 12:04:32.591034  820992 api_server.go:182] apiserver freezer: "9:freezer:/docker/180ef92dda88ce5f5dad67fef9502354f7a03e18ac279cdce0c18c7b86ccf93d/kubepods/burstable/pod56bd58847bac218d3583ba60b4a7f8de/3eb3f9af2d443bcb0a10b96ca92a69a71314ca7ae964a1bed3c5fa6a42d31af5"
	I0224 12:04:32.591090  820992 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/180ef92dda88ce5f5dad67fef9502354f7a03e18ac279cdce0c18c7b86ccf93d/kubepods/burstable/pod56bd58847bac218d3583ba60b4a7f8de/3eb3f9af2d443bcb0a10b96ca92a69a71314ca7ae964a1bed3c5fa6a42d31af5/freezer.state
	I0224 12:04:32.598857  820992 api_server.go:204] freezer state: "THAWED"
	I0224 12:04:32.598883  820992 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0224 12:04:32.602816  820992 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0224 12:04:32.602837  820992 status.go:463] ha-468395 apiserver status = Running (err=<nil>)
	I0224 12:04:32.602846  820992 status.go:176] ha-468395 status: &{Name:ha-468395 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0224 12:04:32.602860  820992 status.go:174] checking status of ha-468395-m02 ...
	I0224 12:04:32.603116  820992 cli_runner.go:164] Run: docker container inspect ha-468395-m02 --format={{.State.Status}}
	I0224 12:04:32.620249  820992 status.go:371] ha-468395-m02 host status = "Stopped" (err=<nil>)
	I0224 12:04:32.620268  820992 status.go:384] host is not running, skipping remaining checks
	I0224 12:04:32.620274  820992 status.go:176] ha-468395-m02 status: &{Name:ha-468395-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0224 12:04:32.620304  820992 status.go:174] checking status of ha-468395-m03 ...
	I0224 12:04:32.620554  820992 cli_runner.go:164] Run: docker container inspect ha-468395-m03 --format={{.State.Status}}
	I0224 12:04:32.638009  820992 status.go:371] ha-468395-m03 host status = "Running" (err=<nil>)
	I0224 12:04:32.638040  820992 host.go:66] Checking if "ha-468395-m03" exists ...
	I0224 12:04:32.638400  820992 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-468395-m03
	I0224 12:04:32.659212  820992 host.go:66] Checking if "ha-468395-m03" exists ...
	I0224 12:04:32.659474  820992 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0224 12:04:32.659518  820992 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-468395-m03
	I0224 12:04:32.676655  820992 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/20451-729451/.minikube/machines/ha-468395-m03/id_rsa Username:docker}
	I0224 12:04:32.758363  820992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0224 12:04:32.769439  820992 kubeconfig.go:125] found "ha-468395" server: "https://192.168.49.254:8443"
	I0224 12:04:32.769470  820992 api_server.go:166] Checking apiserver status ...
	I0224 12:04:32.769505  820992 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 12:04:32.780038  820992 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2246/cgroup
	I0224 12:04:32.789018  820992 api_server.go:182] apiserver freezer: "9:freezer:/docker/5f99ae06a5ace548e0419cc841d313bf785da07b416d002480f3e0fc7a3b6a55/kubepods/burstable/pod32fb445bce2380cf31c4cea8373e903d/4dc16ae468192b68a7d944d6858d61cf6dcb71af51335e633715717ab791fb4b"
	I0224 12:04:32.789075  820992 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/5f99ae06a5ace548e0419cc841d313bf785da07b416d002480f3e0fc7a3b6a55/kubepods/burstable/pod32fb445bce2380cf31c4cea8373e903d/4dc16ae468192b68a7d944d6858d61cf6dcb71af51335e633715717ab791fb4b/freezer.state
	I0224 12:04:32.796582  820992 api_server.go:204] freezer state: "THAWED"
	I0224 12:04:32.796612  820992 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0224 12:04:32.800723  820992 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0224 12:04:32.800745  820992 status.go:463] ha-468395-m03 apiserver status = Running (err=<nil>)
	I0224 12:04:32.800754  820992 status.go:176] ha-468395-m03 status: &{Name:ha-468395-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0224 12:04:32.800777  820992 status.go:174] checking status of ha-468395-m04 ...
	I0224 12:04:32.801040  820992 cli_runner.go:164] Run: docker container inspect ha-468395-m04 --format={{.State.Status}}
	I0224 12:04:32.818774  820992 status.go:371] ha-468395-m04 host status = "Running" (err=<nil>)
	I0224 12:04:32.818823  820992 host.go:66] Checking if "ha-468395-m04" exists ...
	I0224 12:04:32.819152  820992 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-468395-m04
	I0224 12:04:32.835917  820992 host.go:66] Checking if "ha-468395-m04" exists ...
	I0224 12:04:32.836155  820992 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0224 12:04:32.836195  820992 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-468395-m04
	I0224 12:04:32.853092  820992 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/20451-729451/.minikube/machines/ha-468395-m04/id_rsa Username:docker}
	I0224 12:04:32.933939  820992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0224 12:04:32.944770  820992 status.go:176] ha-468395-m04 status: &{Name:ha-468395-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.43s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.65s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.65s)

TestMultiControlPlane/serial/RestartSecondaryNode (28.27s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-468395 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-468395 node start m02 -v=7 --alsologtostderr: (27.242622869s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-468395 status -v=7 --alsologtostderr
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (28.27s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.83s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.83s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (154.01s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-468395 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-468395 -v=7 --alsologtostderr
E0224 12:05:04.364442  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/addons-463362/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:05:32.068759  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/addons-463362/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 stop -p ha-468395 -v=7 --alsologtostderr: (33.551557461s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-468395 --wait=true -v=7 --alsologtostderr
E0224 12:06:20.758703  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/functional-706877/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:06:20.765089  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/functional-706877/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:06:20.776452  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/functional-706877/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:06:20.797780  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/functional-706877/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:06:20.839146  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/functional-706877/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:06:20.920548  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/functional-706877/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:06:21.082116  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/functional-706877/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:06:21.404374  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/functional-706877/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:06:22.046415  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/functional-706877/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:06:23.328648  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/functional-706877/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:06:25.890374  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/functional-706877/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:06:31.012656  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/functional-706877/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:06:41.254989  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/functional-706877/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:07:01.737196  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/functional-706877/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-468395 --wait=true -v=7 --alsologtostderr: (2m0.345192944s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-468395
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (154.01s)

TestMultiControlPlane/serial/DeleteSecondaryNode (9.19s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-468395 node delete m03 -v=7 --alsologtostderr
E0224 12:07:42.698787  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/functional-706877/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-468395 node delete m03 -v=7 --alsologtostderr: (8.455566216s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-468395 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (9.19s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.63s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.63s)

TestMultiControlPlane/serial/StopCluster (32.5s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-468395 stop -v=7 --alsologtostderr
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-468395 stop -v=7 --alsologtostderr: (32.394187082s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-468395 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-468395 status -v=7 --alsologtostderr: exit status 7 (101.161769ms)

-- stdout --
	ha-468395
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-468395-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-468395-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0224 12:08:18.963025  849834 out.go:345] Setting OutFile to fd 1 ...
	I0224 12:08:18.963128  849834 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0224 12:08:18.963136  849834 out.go:358] Setting ErrFile to fd 2...
	I0224 12:08:18.963140  849834 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0224 12:08:18.963327  849834 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20451-729451/.minikube/bin
	I0224 12:08:18.963495  849834 out.go:352] Setting JSON to false
	I0224 12:08:18.963525  849834 mustload.go:65] Loading cluster: ha-468395
	I0224 12:08:18.963648  849834 notify.go:220] Checking for updates...
	I0224 12:08:18.963931  849834 config.go:182] Loaded profile config "ha-468395": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0224 12:08:18.963954  849834 status.go:174] checking status of ha-468395 ...
	I0224 12:08:18.964353  849834 cli_runner.go:164] Run: docker container inspect ha-468395 --format={{.State.Status}}
	I0224 12:08:18.982371  849834 status.go:371] ha-468395 host status = "Stopped" (err=<nil>)
	I0224 12:08:18.982418  849834 status.go:384] host is not running, skipping remaining checks
	I0224 12:08:18.982432  849834 status.go:176] ha-468395 status: &{Name:ha-468395 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0224 12:08:18.982467  849834 status.go:174] checking status of ha-468395-m02 ...
	I0224 12:08:18.982853  849834 cli_runner.go:164] Run: docker container inspect ha-468395-m02 --format={{.State.Status}}
	I0224 12:08:18.999285  849834 status.go:371] ha-468395-m02 host status = "Stopped" (err=<nil>)
	I0224 12:08:18.999305  849834 status.go:384] host is not running, skipping remaining checks
	I0224 12:08:18.999311  849834 status.go:176] ha-468395-m02 status: &{Name:ha-468395-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0224 12:08:18.999330  849834 status.go:174] checking status of ha-468395-m04 ...
	I0224 12:08:18.999551  849834 cli_runner.go:164] Run: docker container inspect ha-468395-m04 --format={{.State.Status}}
	I0224 12:08:19.014925  849834 status.go:371] ha-468395-m04 host status = "Stopped" (err=<nil>)
	I0224 12:08:19.014947  849834 status.go:384] host is not running, skipping remaining checks
	I0224 12:08:19.014954  849834 status.go:176] ha-468395-m04 status: &{Name:ha-468395-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (32.50s)

TestMultiControlPlane/serial/RestartCluster (77.26s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-468395 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0224 12:09:04.620827  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/functional-706877/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-468395 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (1m16.519043736s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-468395 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (77.26s)
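The `kubectl get nodes -o go-template` call above prints one `Ready` condition status per node by ranging over `.items` and `.status.conditions`. The same check can be run against `kubectl get nodes -o json` output; the sketch below emulates the template's nested ranges on a minimal hand-written node list (the two-node data is illustrative, not taken from this run):

```python
import json

# Minimal stand-in for `kubectl get nodes -o json` output (illustrative data).
NODES_JSON = json.dumps({
    "items": [
        {"status": {"conditions": [
            {"type": "MemoryPressure", "status": "False"},
            {"type": "Ready", "status": "True"},
        ]}},
        {"status": {"conditions": [
            {"type": "Ready", "status": "True"},
        ]}},
    ]
})

def ready_statuses(nodes_json: str) -> list:
    """Emulates: {{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}..."""
    nodes = json.loads(nodes_json)
    return [cond["status"]
            for item in nodes["items"]
            for cond in item["status"]["conditions"]
            if cond["type"] == "Ready"]

print(ready_statuses(NODES_JSON))  # → ['True', 'True']
```

The test passes when every node reports `True` for its `Ready` condition, which is what the go-template's output is checked for.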

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.63s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.63s)

TestMultiControlPlane/serial/AddSecondaryNode (41.62s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-468395 --control-plane -v=7 --alsologtostderr
E0224 12:10:04.363891  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/addons-463362/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-468395 --control-plane -v=7 --alsologtostderr: (40.810932926s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-468395 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (41.62s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.83s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.83s)

TestImageBuild/serial/Setup (20.99s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-544494 --driver=docker  --container-runtime=docker
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-544494 --driver=docker  --container-runtime=docker: (20.988776173s)
--- PASS: TestImageBuild/serial/Setup (20.99s)

TestImageBuild/serial/NormalBuild (0.9s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-544494
--- PASS: TestImageBuild/serial/NormalBuild (0.90s)

TestImageBuild/serial/BuildWithBuildArg (0.65s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-544494
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.65s)

TestImageBuild/serial/BuildWithDockerIgnore (0.43s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-544494
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.43s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.46s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-544494
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.46s)

TestJSONOutput/start/Command (68.31s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-370044 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
E0224 12:11:20.758613  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/functional-706877/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:11:48.465314  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/functional-706877/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-370044 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (1m8.308290358s)
--- PASS: TestJSONOutput/start/Command (68.31s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.55s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-370044 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.55s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.44s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-370044 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.44s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (10.85s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-370044 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-370044 --output=json --user=testUser: (10.850774437s)
--- PASS: TestJSONOutput/stop/Command (10.85s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-246459 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-246459 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (64.268825ms)

-- stdout --
	{"specversion":"1.0","id":"39cdbd7c-59db-4c9f-9702-2e4090ee1879","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-246459] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"16855828-3f44-4191-99f7-b80614ac3924","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20451"}}
	{"specversion":"1.0","id":"c0c0b495-5834-40a6-b8cb-682f55bb0fbb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d1b5ce32-3100-46e5-8a1e-12074efcd217","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20451-729451/kubeconfig"}}
	{"specversion":"1.0","id":"6c1dfce1-e57a-4be7-bbe4-ee28dccfbe04","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20451-729451/.minikube"}}
	{"specversion":"1.0","id":"e170ea20-5aff-4e07-816e-f581392c49a1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"8001ae45-56e2-42ca-be7e-c0b207e6edc2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"6984b298-bca5-4da5-b5aa-246af6118237","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-246459" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-246459
--- PASS: TestErrorJSONOutput (0.20s)
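Each line of the `--output=json` stream above is a CloudEvents-style JSON object with a `type` and a `data` payload. A consumer can split the stream by line and dispatch on `type`; the sketch below does that for the error event in the stdout above (the event line is copied verbatim from the log, and the short-suffix convention for `type` is an assumption):

```python
import json

# One event line copied from the stdout above.
EVENT_LINE = ('{"specversion":"1.0","id":"6984b298-bca5-4da5-b5aa-246af6118237",'
              '"source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error",'
              '"datacontenttype":"application/json","data":{"advice":"","exitcode":"56",'
              '"issues":"","message":"The driver \'fail\' is not supported on linux/amd64",'
              '"name":"DRV_UNSUPPORTED_OS","url":""}}')

def classify(line: str):
    """Return (short event kind, data payload) for one JSON-output line."""
    event = json.loads(line)
    kind = event["type"].rsplit(".", 1)[-1]  # e.g. "error", "step", "info"
    return kind, event["data"]

kind, data = classify(EVENT_LINE)
print(kind, data["exitcode"], data["name"])  # → error 56 DRV_UNSUPPORTED_OS
```

Note the `exitcode` arrives as the string `"56"`, matching the process's exit status 56 reported by the test.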

TestKicCustomNetwork/create_custom_network (26.11s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-356114 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-356114 --network=: (24.028657771s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-356114" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-356114
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-356114: (2.063139399s)
--- PASS: TestKicCustomNetwork/create_custom_network (26.11s)

TestKicCustomNetwork/use_default_bridge_network (22.81s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-236927 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-236927 --network=bridge: (20.9215913s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-236927" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-236927
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-236927: (1.867096634s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (22.81s)

TestKicExistingNetwork (22.54s)

=== RUN   TestKicExistingNetwork
I0224 12:13:03.341238  736216 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0224 12:13:03.356572  736216 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0224 12:13:03.356632  736216 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0224 12:13:03.356651  736216 cli_runner.go:164] Run: docker network inspect existing-network
W0224 12:13:03.371884  736216 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0224 12:13:03.371916  736216 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I0224 12:13:03.371930  736216 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I0224 12:13:03.372043  736216 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0224 12:13:03.387412  736216 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-3dce98b05f69 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:1a:05:14:ac:57:47} reservation:<nil>}
I0224 12:13:03.387842  736216 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001fe7760}
I0224 12:13:03.387867  736216 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0224 12:13:03.387910  736216 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0224 12:13:03.429676  736216 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-698837 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-698837 --network=existing-network: (20.524214684s)
helpers_test.go:175: Cleaning up "existing-network-698837" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-698837
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-698837: (1.891816385s)
I0224 12:13:25.862503  736216 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (22.54s)
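The `network.go` lines above show minikube skipping 192.168.49.0/24 (taken by the existing `br-3dce98b05f69` bridge) and settling on 192.168.58.0/24. That scan can be sketched with the standard `ipaddress` module; stepping the third octet by 9 matches the 49 → 58 jump in this log but is an assumption about `network.go`'s exact strategy:

```python
import ipaddress

def first_free_subnet(taken, start_octet=49, step=9, limit=256):
    """Return the first 192.168.x.0/24 that overlaps none of the taken subnets."""
    taken_nets = [ipaddress.ip_network(t) for t in taken]
    octet = start_octet
    while octet < limit:
        candidate = ipaddress.ip_network(f"192.168.{octet}.0/24")
        if not any(candidate.overlaps(t) for t in taken_nets):
            return str(candidate)
        octet += step
    raise RuntimeError("no free private /24 found")

# 192.168.49.0/24 is held by the existing KIC bridge, as in the log above.
print(first_free_subnet(["192.168.49.0/24"]))  # → 192.168.58.0/24
```

The chosen subnet then feeds the `docker network create --driver=bridge --subnet=... --gateway=...` command seen in the log.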

TestKicCustomSubnet (22.61s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-569096 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-569096 --subnet=192.168.60.0/24: (20.501124689s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-569096 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-569096" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-569096
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-569096: (2.090510083s)
--- PASS: TestKicCustomSubnet (22.61s)

TestKicStaticIP (25.55s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-249720 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-249720 --static-ip=192.168.200.200: (23.362614075s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-249720 ip
helpers_test.go:175: Cleaning up "static-ip-249720" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-249720
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-249720: (2.064648318s)
--- PASS: TestKicStaticIP (25.55s)
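`--static-ip=192.168.200.200` pins the KIC container to a fixed address, which the follow-up `minikube ip` call verifies. A basic sanity check that a requested static IP is a usable host address (private, and not the network or broadcast address of its implied /24) can be written with `ipaddress`; this mirrors the kind of validation such a flag needs, not minikube's actual code:

```python
import ipaddress

def valid_static_ip(ip_str: str) -> bool:
    """Check the address is private and not the .0/.255 edge of its /24."""
    ip = ipaddress.ip_address(ip_str)
    net = ipaddress.ip_network(f"{ip_str}/24", strict=False)
    return ip.is_private and ip not in (net.network_address, net.broadcast_address)

print(valid_static_ip("192.168.200.200"))  # → True
print(valid_static_ip("192.168.200.255"))  # → False (broadcast address)
```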

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (49.48s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-726801 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-726801 --driver=docker  --container-runtime=docker: (20.363038181s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-738595 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-738595 --driver=docker  --container-runtime=docker: (23.91022661s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-726801
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-738595
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-738595" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-738595
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-738595: (2.01549274s)
helpers_test.go:175: Cleaning up "first-726801" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-726801
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-726801: (2.091565473s)
--- PASS: TestMinikubeProfile (49.48s)

TestMountStart/serial/StartWithMountFirst (9s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-738318 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
E0224 12:15:04.367321  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/addons-463362/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-738318 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (7.997058742s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.00s)

TestMountStart/serial/VerifyMountFirst (0.24s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-738318 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.24s)

TestMountStart/serial/StartWithMountSecond (9.03s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-753171 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-753171 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (8.027520411s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.03s)

TestMountStart/serial/VerifyMountSecond (0.24s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-753171 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.24s)

TestMountStart/serial/DeleteFirst (1.43s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-738318 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-738318 --alsologtostderr -v=5: (1.434668322s)
--- PASS: TestMountStart/serial/DeleteFirst (1.43s)

TestMountStart/serial/VerifyMountPostDelete (0.23s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-753171 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.23s)

TestMountStart/serial/Stop (1.17s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-753171
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-753171: (1.168059551s)
--- PASS: TestMountStart/serial/Stop (1.17s)

TestMountStart/serial/RestartStopped (7.37s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-753171
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-753171: (6.368374947s)
--- PASS: TestMountStart/serial/RestartStopped (7.37s)

TestMountStart/serial/VerifyMountPostStop (0.24s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-753171 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.24s)

TestMultiNode/serial/FreshStart2Nodes (56.69s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-824151 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0224 12:16:20.758606  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/functional-706877/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:16:27.430126  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/addons-463362/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-824151 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (56.262233066s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824151 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (56.69s)

TestMultiNode/serial/DeployApp2Nodes (35.56s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-824151 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-824151 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-824151 -- rollout status deployment/busybox: (2.279653468s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-824151 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0224 12:16:33.363206  736216 retry.go:31] will retry after 1.485355899s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-824151 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0224 12:16:34.962163  736216 retry.go:31] will retry after 1.329009335s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-824151 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0224 12:16:36.404199  736216 retry.go:31] will retry after 1.211793171s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-824151 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0224 12:16:37.728919  736216 retry.go:31] will retry after 5.019470643s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-824151 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0224 12:16:42.865021  736216 retry.go:31] will retry after 5.707863677s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-824151 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0224 12:16:48.688295  736216 retry.go:31] will retry after 4.446139192s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-824151 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0224 12:16:53.247575  736216 retry.go:31] will retry after 11.90757683s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-824151 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-824151 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-824151 -- exec busybox-58667487b6-5qlfk -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-824151 -- exec busybox-58667487b6-pxftg -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-824151 -- exec busybox-58667487b6-5qlfk -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-824151 -- exec busybox-58667487b6-pxftg -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-824151 -- exec busybox-58667487b6-5qlfk -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-824151 -- exec busybox-58667487b6-pxftg -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (35.56s)

TestMultiNode/serial/PingHostFrom2Pods (0.73s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-824151 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-824151 -- exec busybox-58667487b6-5qlfk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-824151 -- exec busybox-58667487b6-5qlfk -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-824151 -- exec busybox-58667487b6-pxftg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-824151 -- exec busybox-58667487b6-pxftg -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.73s)

TestMultiNode/serial/AddNode (17.67s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-824151 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-824151 -v 3 --alsologtostderr: (17.066024998s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824151 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (17.67s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-824151 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.59s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.59s)

TestMultiNode/serial/CopyFile (8.63s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824151 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824151 cp testdata/cp-test.txt multinode-824151:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824151 ssh -n multinode-824151 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824151 cp multinode-824151:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3174765308/001/cp-test_multinode-824151.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824151 ssh -n multinode-824151 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824151 cp multinode-824151:/home/docker/cp-test.txt multinode-824151-m02:/home/docker/cp-test_multinode-824151_multinode-824151-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824151 ssh -n multinode-824151 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824151 ssh -n multinode-824151-m02 "sudo cat /home/docker/cp-test_multinode-824151_multinode-824151-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824151 cp multinode-824151:/home/docker/cp-test.txt multinode-824151-m03:/home/docker/cp-test_multinode-824151_multinode-824151-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824151 ssh -n multinode-824151 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824151 ssh -n multinode-824151-m03 "sudo cat /home/docker/cp-test_multinode-824151_multinode-824151-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824151 cp testdata/cp-test.txt multinode-824151-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824151 ssh -n multinode-824151-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824151 cp multinode-824151-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3174765308/001/cp-test_multinode-824151-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824151 ssh -n multinode-824151-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824151 cp multinode-824151-m02:/home/docker/cp-test.txt multinode-824151:/home/docker/cp-test_multinode-824151-m02_multinode-824151.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824151 ssh -n multinode-824151-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824151 ssh -n multinode-824151 "sudo cat /home/docker/cp-test_multinode-824151-m02_multinode-824151.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824151 cp multinode-824151-m02:/home/docker/cp-test.txt multinode-824151-m03:/home/docker/cp-test_multinode-824151-m02_multinode-824151-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824151 ssh -n multinode-824151-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824151 ssh -n multinode-824151-m03 "sudo cat /home/docker/cp-test_multinode-824151-m02_multinode-824151-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824151 cp testdata/cp-test.txt multinode-824151-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824151 ssh -n multinode-824151-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824151 cp multinode-824151-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3174765308/001/cp-test_multinode-824151-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824151 ssh -n multinode-824151-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824151 cp multinode-824151-m03:/home/docker/cp-test.txt multinode-824151:/home/docker/cp-test_multinode-824151-m03_multinode-824151.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824151 ssh -n multinode-824151-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824151 ssh -n multinode-824151 "sudo cat /home/docker/cp-test_multinode-824151-m03_multinode-824151.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824151 cp multinode-824151-m03:/home/docker/cp-test.txt multinode-824151-m02:/home/docker/cp-test_multinode-824151-m03_multinode-824151-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824151 ssh -n multinode-824151-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824151 ssh -n multinode-824151-m02 "sudo cat /home/docker/cp-test_multinode-824151-m03_multinode-824151-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.63s)

TestMultiNode/serial/StopNode (2.06s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824151 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-824151 node stop m03: (1.174015117s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824151 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-824151 status: exit status 7 (437.668885ms)

-- stdout --
	multinode-824151
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-824151-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-824151-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824151 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-824151 status --alsologtostderr: exit status 7 (447.654771ms)

-- stdout --
	multinode-824151
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-824151-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-824151-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0224 12:17:35.713514  934845 out.go:345] Setting OutFile to fd 1 ...
	I0224 12:17:35.713771  934845 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0224 12:17:35.713780  934845 out.go:358] Setting ErrFile to fd 2...
	I0224 12:17:35.713785  934845 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0224 12:17:35.713996  934845 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20451-729451/.minikube/bin
	I0224 12:17:35.714188  934845 out.go:352] Setting JSON to false
	I0224 12:17:35.714218  934845 mustload.go:65] Loading cluster: multinode-824151
	I0224 12:17:35.714282  934845 notify.go:220] Checking for updates...
	I0224 12:17:35.714759  934845 config.go:182] Loaded profile config "multinode-824151": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0224 12:17:35.714791  934845 status.go:174] checking status of multinode-824151 ...
	I0224 12:17:35.715423  934845 cli_runner.go:164] Run: docker container inspect multinode-824151 --format={{.State.Status}}
	I0224 12:17:35.735491  934845 status.go:371] multinode-824151 host status = "Running" (err=<nil>)
	I0224 12:17:35.735518  934845 host.go:66] Checking if "multinode-824151" exists ...
	I0224 12:17:35.735854  934845 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-824151
	I0224 12:17:35.751443  934845 host.go:66] Checking if "multinode-824151" exists ...
	I0224 12:17:35.751685  934845 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0224 12:17:35.751718  934845 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-824151
	I0224 12:17:35.768593  934845 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/20451-729451/.minikube/machines/multinode-824151/id_rsa Username:docker}
	I0224 12:17:35.850166  934845 ssh_runner.go:195] Run: systemctl --version
	I0224 12:17:35.854365  934845 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0224 12:17:35.864611  934845 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0224 12:17:35.912674  934845 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:63 SystemTime:2025-02-24 12:17:35.904002826 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647988736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:28.0.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.21.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.33.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0224 12:17:35.913312  934845 kubeconfig.go:125] found "multinode-824151" server: "https://192.168.67.2:8443"
	I0224 12:17:35.913350  934845 api_server.go:166] Checking apiserver status ...
	I0224 12:17:35.913390  934845 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0224 12:17:35.924695  934845 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2377/cgroup
	I0224 12:17:35.933145  934845 api_server.go:182] apiserver freezer: "9:freezer:/docker/ba562c9ffd94895f33302ecbe0bafef9302b18484fa8a5f85afe7db04561fc0a/kubepods/burstable/poddca1e7be68836393855341f843a835d8/0858b8fba637c4e8cdf9b7a6b8e7f34478aa38fd4a671c9de56011fa8bdffded"
	I0224 12:17:35.933245  934845 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/ba562c9ffd94895f33302ecbe0bafef9302b18484fa8a5f85afe7db04561fc0a/kubepods/burstable/poddca1e7be68836393855341f843a835d8/0858b8fba637c4e8cdf9b7a6b8e7f34478aa38fd4a671c9de56011fa8bdffded/freezer.state
	I0224 12:17:35.940678  934845 api_server.go:204] freezer state: "THAWED"
	I0224 12:17:35.940704  934845 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0224 12:17:35.945216  934845 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0224 12:17:35.945239  934845 status.go:463] multinode-824151 apiserver status = Running (err=<nil>)
	I0224 12:17:35.945253  934845 status.go:176] multinode-824151 status: &{Name:multinode-824151 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0224 12:17:35.945274  934845 status.go:174] checking status of multinode-824151-m02 ...
	I0224 12:17:35.945551  934845 cli_runner.go:164] Run: docker container inspect multinode-824151-m02 --format={{.State.Status}}
	I0224 12:17:35.965899  934845 status.go:371] multinode-824151-m02 host status = "Running" (err=<nil>)
	I0224 12:17:35.965930  934845 host.go:66] Checking if "multinode-824151-m02" exists ...
	I0224 12:17:35.966258  934845 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-824151-m02
	I0224 12:17:35.982710  934845 host.go:66] Checking if "multinode-824151-m02" exists ...
	I0224 12:17:35.982952  934845 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0224 12:17:35.982989  934845 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-824151-m02
	I0224 12:17:35.999928  934845 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/20451-729451/.minikube/machines/multinode-824151-m02/id_rsa Username:docker}
	I0224 12:17:36.081936  934845 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0224 12:17:36.092163  934845 status.go:176] multinode-824151-m02 status: &{Name:multinode-824151-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0224 12:17:36.092203  934845 status.go:174] checking status of multinode-824151-m03 ...
	I0224 12:17:36.092459  934845 cli_runner.go:164] Run: docker container inspect multinode-824151-m03 --format={{.State.Status}}
	I0224 12:17:36.109273  934845 status.go:371] multinode-824151-m03 host status = "Stopped" (err=<nil>)
	I0224 12:17:36.109305  934845 status.go:384] host is not running, skipping remaining checks
	I0224 12:17:36.109312  934845 status.go:176] multinode-824151-m03 status: &{Name:multinode-824151-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.06s)

TestMultiNode/serial/StartAfterStop (9.70s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824151 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-824151 node start m03 -v=7 --alsologtostderr: (9.075965052s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824151 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.70s)

TestMultiNode/serial/RestartKeepsNodes (81.77s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-824151
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-824151
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-824151: (22.491672512s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-824151 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-824151 --wait=true -v=8 --alsologtostderr: (59.181405486s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-824151
--- PASS: TestMultiNode/serial/RestartKeepsNodes (81.77s)

TestMultiNode/serial/DeleteNode (4.9s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824151 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-824151 node delete m03: (4.354580857s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824151 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (4.90s)

TestMultiNode/serial/StopMultiNode (21.31s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824151 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-824151 stop: (21.137449155s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824151 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-824151 status: exit status 7 (84.811166ms)

-- stdout --
	multinode-824151
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-824151-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824151 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-824151 status --alsologtostderr: exit status 7 (84.842236ms)

-- stdout --
	multinode-824151
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-824151-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0224 12:19:33.751949  950076 out.go:345] Setting OutFile to fd 1 ...
	I0224 12:19:33.752064  950076 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0224 12:19:33.752078  950076 out.go:358] Setting ErrFile to fd 2...
	I0224 12:19:33.752090  950076 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0224 12:19:33.752267  950076 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20451-729451/.minikube/bin
	I0224 12:19:33.752422  950076 out.go:352] Setting JSON to false
	I0224 12:19:33.752447  950076 mustload.go:65] Loading cluster: multinode-824151
	I0224 12:19:33.752568  950076 notify.go:220] Checking for updates...
	I0224 12:19:33.752822  950076 config.go:182] Loaded profile config "multinode-824151": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0224 12:19:33.752850  950076 status.go:174] checking status of multinode-824151 ...
	I0224 12:19:33.753318  950076 cli_runner.go:164] Run: docker container inspect multinode-824151 --format={{.State.Status}}
	I0224 12:19:33.771243  950076 status.go:371] multinode-824151 host status = "Stopped" (err=<nil>)
	I0224 12:19:33.771295  950076 status.go:384] host is not running, skipping remaining checks
	I0224 12:19:33.771304  950076 status.go:176] multinode-824151 status: &{Name:multinode-824151 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0224 12:19:33.771369  950076 status.go:174] checking status of multinode-824151-m02 ...
	I0224 12:19:33.771754  950076 cli_runner.go:164] Run: docker container inspect multinode-824151-m02 --format={{.State.Status}}
	I0224 12:19:33.788504  950076 status.go:371] multinode-824151-m02 host status = "Stopped" (err=<nil>)
	I0224 12:19:33.788527  950076 status.go:384] host is not running, skipping remaining checks
	I0224 12:19:33.788537  950076 status.go:176] multinode-824151-m02 status: &{Name:multinode-824151-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.31s)

TestMultiNode/serial/RestartMultiNode (47.45s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-824151 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0224 12:20:04.364609  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/addons-463362/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-824151 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (46.896595659s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-824151 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (47.45s)

TestMultiNode/serial/ValidateNameConflict (24.71s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-824151
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-824151-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-824151-m02 --driver=docker  --container-runtime=docker: exit status 14 (64.890441ms)

-- stdout --
	* [multinode-824151-m02] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20451
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20451-729451/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20451-729451/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-824151-m02' is duplicated with machine name 'multinode-824151-m02' in profile 'multinode-824151'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-824151-m03 --driver=docker  --container-runtime=docker
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-824151-m03 --driver=docker  --container-runtime=docker: (22.197490044s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-824151
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-824151: exit status 80 (263.451304ms)

-- stdout --
	* Adding node m03 to cluster multinode-824151 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-824151-m03 already exists in multinode-824151-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_3.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-824151-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-824151-m03: (2.127976951s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (24.71s)

TestPreload (90.23s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-847230 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
E0224 12:21:20.758149  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/functional-706877/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-847230 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (58.832694956s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-847230 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-847230 image pull gcr.io/k8s-minikube/busybox: (1.320036808s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-847230
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-847230: (10.74793501s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-847230 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-847230 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (16.996114572s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-847230 image list
helpers_test.go:175: Cleaning up "test-preload-847230" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-847230
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-847230: (2.097935883s)
--- PASS: TestPreload (90.23s)

TestScheduledStopUnix (94.26s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-431799 --memory=2048 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-431799 --memory=2048 --driver=docker  --container-runtime=docker: (21.364894514s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-431799 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-431799 -n scheduled-stop-431799
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-431799 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0224 12:22:41.815623  736216 retry.go:31] will retry after 99.273µs: open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/scheduled-stop-431799/pid: no such file or directory
I0224 12:22:41.816791  736216 retry.go:31] will retry after 194.476µs: open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/scheduled-stop-431799/pid: no such file or directory
I0224 12:22:41.817936  736216 retry.go:31] will retry after 269.535µs: open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/scheduled-stop-431799/pid: no such file or directory
I0224 12:22:41.819072  736216 retry.go:31] will retry after 381.755µs: open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/scheduled-stop-431799/pid: no such file or directory
I0224 12:22:41.820188  736216 retry.go:31] will retry after 645.445µs: open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/scheduled-stop-431799/pid: no such file or directory
I0224 12:22:41.821309  736216 retry.go:31] will retry after 800.145µs: open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/scheduled-stop-431799/pid: no such file or directory
I0224 12:22:41.822435  736216 retry.go:31] will retry after 1.26576ms: open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/scheduled-stop-431799/pid: no such file or directory
I0224 12:22:41.824684  736216 retry.go:31] will retry after 1.084126ms: open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/scheduled-stop-431799/pid: no such file or directory
I0224 12:22:41.826877  736216 retry.go:31] will retry after 3.242034ms: open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/scheduled-stop-431799/pid: no such file or directory
I0224 12:22:41.831092  736216 retry.go:31] will retry after 1.927951ms: open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/scheduled-stop-431799/pid: no such file or directory
I0224 12:22:41.833283  736216 retry.go:31] will retry after 6.510001ms: open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/scheduled-stop-431799/pid: no such file or directory
I0224 12:22:41.840491  736216 retry.go:31] will retry after 6.592993ms: open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/scheduled-stop-431799/pid: no such file or directory
I0224 12:22:41.847734  736216 retry.go:31] will retry after 6.945606ms: open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/scheduled-stop-431799/pid: no such file or directory
I0224 12:22:41.855259  736216 retry.go:31] will retry after 10.556021ms: open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/scheduled-stop-431799/pid: no such file or directory
I0224 12:22:41.866434  736216 retry.go:31] will retry after 39.801014ms: open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/scheduled-stop-431799/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-431799 --cancel-scheduled
E0224 12:22:43.827878  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/functional-706877/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-431799 -n scheduled-stop-431799
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-431799
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-431799 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-431799
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-431799: exit status 7 (68.298369ms)

-- stdout --
	scheduled-stop-431799
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-431799 -n scheduled-stop-431799
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-431799 -n scheduled-stop-431799: exit status 7 (67.460658ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-431799" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-431799
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-431799: (1.617818963s)
--- PASS: TestScheduledStopUnix (94.26s)

TestSkaffold (97.33s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe3316473293 version
skaffold_test.go:63: skaffold version: v2.14.1
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-930053 --memory=2600 --driver=docker  --container-runtime=docker
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-930053 --memory=2600 --driver=docker  --container-runtime=docker: (23.86758689s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/Docker_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe3316473293 run --minikube-profile skaffold-930053 --kube-context skaffold-930053 --status-check=true --port-forward=false --interactive=false
E0224 12:25:04.364559  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/addons-463362/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe3316473293 run --minikube-profile skaffold-930053 --kube-context skaffold-930053 --status-check=true --port-forward=false --interactive=false: (58.875873491s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-6fdfb95c4b-5tlxt" [4df7617c-f9dc-4769-8a4b-b55d768de8f2] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.003822909s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-6cb4889c5b-dz6k2" [1a48967d-9af6-4c1f-a740-e26911476ca7] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003079362s
helpers_test.go:175: Cleaning up "skaffold-930053" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-930053
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-930053: (2.70602421s)
--- PASS: TestSkaffold (97.33s)

TestInsufficientStorage (12.53s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-324616 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-324616 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (10.417055341s)

-- stdout --
	{"specversion":"1.0","id":"1414db34-7332-4388-b2d5-f85128708655","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-324616] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5025e697-8f55-448b-8475-ada3a471eff5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20451"}}
	{"specversion":"1.0","id":"5126c06d-409d-4100-8f2a-086bc35d8f73","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4cc6ef6b-2040-4b12-92d6-56206e0af1f7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20451-729451/kubeconfig"}}
	{"specversion":"1.0","id":"e562e4d9-7052-402d-98f7-4d656b211f04","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20451-729451/.minikube"}}
	{"specversion":"1.0","id":"4dc00c52-3289-44d4-aa56-fa4a89a69910","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"2106f684-05f4-4e5e-a250-65eef2ccca0a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"853cae9e-8941-4566-b2cf-b11dd76fc415","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"4aad53b3-be53-4f81-8031-1012fa416407","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"d1c6d9e4-9ae5-4e5f-9e74-d3a575f012ce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"d9f5b337-2f42-4da1-a18d-50fa0bee715b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"82bd5a50-45a2-4b28-a34d-c5665d6c936c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-324616\" primary control-plane node in \"insufficient-storage-324616\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"68b556ec-4790-4987-ac04-de5e178dac52","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.46-1740046583-20436 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"a64b85a6-f117-41ae-a7e5-55f3e95d405a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"03687cbe-d66c-4883-9791-a333c01232fc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-324616 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-324616 --output=json --layout=cluster: exit status 7 (250.746468ms)

-- stdout --
	{"Name":"insufficient-storage-324616","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-324616","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0224 12:25:42.306954  989584 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-324616" does not appear in /home/jenkins/minikube-integration/20451-729451/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-324616 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-324616 --output=json --layout=cluster: exit status 7 (248.965238ms)

-- stdout --
	{"Name":"insufficient-storage-324616","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-324616","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0224 12:25:42.556401  989683 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-324616" does not appear in /home/jenkins/minikube-integration/20451-729451/kubeconfig
	E0224 12:25:42.566056  989683 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/insufficient-storage-324616/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-324616" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-324616
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-324616: (1.616570947s)
--- PASS: TestInsufficientStorage (12.53s)

TestRunningBinaryUpgrade (102.61s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.723498932 start -p running-upgrade-203304 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.723498932 start -p running-upgrade-203304 --memory=2200 --vm-driver=docker  --container-runtime=docker: (1m16.285603378s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-203304 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-203304 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (23.761669457s)
helpers_test.go:175: Cleaning up "running-upgrade-203304" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-203304
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-203304: (2.023809846s)
--- PASS: TestRunningBinaryUpgrade (102.61s)

TestKubernetesUpgrade (324.92s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-523310 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-523310 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (32.776706209s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-523310
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-523310: (1.201971572s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-523310 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-523310 status --format={{.Host}}: exit status 7 (77.289296ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-523310 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-523310 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m28.312691775s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-523310 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-523310 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-523310 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker: exit status 106 (66.201479ms)

-- stdout --
	* [kubernetes-upgrade-523310] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20451
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20451-729451/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20451-729451/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-523310
	    minikube start -p kubernetes-upgrade-523310 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5233102 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.2, by running:
	    
	    minikube start -p kubernetes-upgrade-523310 --kubernetes-version=v1.32.2
	    

** /stderr **
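minikube refuses this start because the requested `--kubernetes-version` is older than the version the existing cluster was created with. A minimal sketch of that version comparison (a hypothetical helper for illustration, not minikube's actual code), using the two versions from the log above:

```python
def parse_version(v: str) -> tuple:
    # "v1.32.2" -> (1, 32, 2); assumes the plain vMAJOR.MINOR.PATCH form above.
    return tuple(int(p) for p in v.lstrip("v").split("."))

existing, requested = "v1.32.2", "v1.20.0"

# Tuple comparison is lexicographic, so (1, 20, 0) < (1, 32, 2).
if parse_version(requested) < parse_version(existing):
    print(f"K8S_DOWNGRADE_UNSUPPORTED: cannot downgrade {existing} to {requested}")
```

This is why the suggestion block offers only delete-and-recreate or a second profile: in-place downgrades are not supported.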
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-523310 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-523310 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (19.399220048s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-523310" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-523310
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-523310: (3.028640038s)
--- PASS: TestKubernetesUpgrade (324.92s)

TestMissingContainerUpgrade (130.16s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.4043112553 start -p missing-upgrade-837336 --memory=2200 --driver=docker  --container-runtime=docker
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.4043112553 start -p missing-upgrade-837336 --memory=2200 --driver=docker  --container-runtime=docker: (1m9.282414145s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-837336
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-837336: (11.264236043s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-837336
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-837336 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-837336 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (46.926439472s)
helpers_test.go:175: Cleaning up "missing-upgrade-837336" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-837336
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-837336: (2.0677686s)
--- PASS: TestMissingContainerUpgrade (130.16s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-791364 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-791364 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (87.33182ms)

-- stdout --
	* [NoKubernetes-791364] minikube v1.35.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20451
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20451-729451/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20451-729451/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

TestNoKubernetes/serial/StartWithK8s (33.08s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-791364 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-791364 --driver=docker  --container-runtime=docker: (32.742548616s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-791364 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (33.08s)

TestNoKubernetes/serial/StartWithStopK8s (19.23s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-791364 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-791364 --no-kubernetes --driver=docker  --container-runtime=docker: (16.988885773s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-791364 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-791364 status -o json: exit status 2 (364.694146ms)

-- stdout --
	{"Name":"NoKubernetes-791364","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-791364
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-791364: (1.874444728s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (19.23s)
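The `status -o json` output above is what the test inspects: the exit status 2 signals a stopped component, while the JSON on stdout carries the per-component state. A minimal sketch of checking that state, assuming only the JSON shape shown in this log:

```python
import json

# Status JSON as emitted above by `minikube status -o json`; the non-zero
# exit status flags the stopped components, but the JSON is still printed.
raw = ('{"Name":"NoKubernetes-791364","Host":"Running","Kubelet":"Stopped",'
       '"APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}')

status = json.loads(raw)

# With --no-kubernetes the container keeps running while kubelet and
# apiserver stay down, which is exactly what this test expects.
assert status["Host"] == "Running"
assert status["Kubelet"] == "Stopped" and status["APIServer"] == "Stopped"
print("host up, kubernetes stopped")
```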

TestNoKubernetes/serial/Start (9.07s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-791364 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-791364 --no-kubernetes --driver=docker  --container-runtime=docker: (9.065454503s)
--- PASS: TestNoKubernetes/serial/Start (9.07s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.3s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-791364 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-791364 "sudo systemctl is-active --quiet service kubelet": exit status 1 (296.38961ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.30s)
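The `ssh: Process exited with status 3` above is the expected result here: `systemctl is-active` follows LSB-style exit codes, where 0 means the unit is active and 3 means it is not running, so a non-zero exit proves kubelet is down. A small sketch of that interpretation (the code table is an assumption based on systemd's documented behavior, not part of this test):

```python
# LSB-style exit codes returned by `systemctl is-active --quiet <unit>`:
# 0 = active, 3 = inactive / not running.
LSB_STATUS = {0: "active", 3: "inactive"}

ssh_exit_status = 3  # from the ssh invocation in the log above
assert LSB_STATUS[ssh_exit_status] == "inactive"
print("kubelet service inactive, as the test requires")
```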

TestNoKubernetes/serial/ProfileList (4.83s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (3.877008839s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (4.83s)

TestNoKubernetes/serial/Stop (1.2s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-791364
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-791364: (1.196364779s)
--- PASS: TestNoKubernetes/serial/Stop (1.20s)

TestNoKubernetes/serial/StartNoArgs (9.19s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-791364 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-791364 --driver=docker  --container-runtime=docker: (9.188510983s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (9.19s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.32s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-791364 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-791364 "sudo systemctl is-active --quiet service kubelet": exit status 1 (318.333705ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.32s)

TestStoppedBinaryUpgrade/Setup (0.42s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.42s)

TestStoppedBinaryUpgrade/Upgrade (66.76s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.796726239 start -p stopped-upgrade-214076 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.796726239 start -p stopped-upgrade-214076 --memory=2200 --vm-driver=docker  --container-runtime=docker: (31.378078228s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.796726239 -p stopped-upgrade-214076 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.796726239 -p stopped-upgrade-214076 stop: (10.648741643s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-214076 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-214076 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (24.729618199s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (66.76s)

TestPause/serial/Start (38.84s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-168060 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-168060 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (38.835352073s)
--- PASS: TestPause/serial/Start (38.84s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.08s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-214076
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-214076: (1.077778455s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.08s)

TestNetworkPlugins/group/auto/Start (63.86s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-705761 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-705761 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (1m3.858739714s)
--- PASS: TestNetworkPlugins/group/auto/Start (63.86s)

TestPause/serial/SecondStartNoReconfiguration (33.64s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-168060 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-168060 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (33.624273249s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (33.64s)

TestNetworkPlugins/group/false/Start (68.11s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-705761 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
E0224 12:30:04.364210  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/addons-463362/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:30:17.922251  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/skaffold-930053/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:30:17.928615  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/skaffold-930053/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:30:17.939954  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/skaffold-930053/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:30:17.961240  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/skaffold-930053/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:30:18.003532  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/skaffold-930053/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:30:18.084933  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/skaffold-930053/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:30:18.246404  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/skaffold-930053/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:30:18.567792  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/skaffold-930053/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:30:19.209290  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/skaffold-930053/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:30:20.490712  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/skaffold-930053/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:30:23.052555  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/skaffold-930053/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-705761 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (1m8.108841781s)
--- PASS: TestNetworkPlugins/group/false/Start (68.11s)

TestPause/serial/Pause (0.54s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-168060 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.54s)

TestPause/serial/VerifyStatus (0.29s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-168060 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-168060 --output=json --layout=cluster: exit status 2 (292.689578ms)

-- stdout --
	{"Name":"pause-168060","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-168060","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.29s)
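The cluster-layout status above encodes component state with HTTP-style codes: 200 for OK, 405 for stopped, and 418 for paused. A minimal sketch of reading the paused state out of the exact JSON shown in this log:

```python
import json

# Cluster-layout status as printed above by
# `minikube status --output=json --layout=cluster` while the cluster is paused.
raw = '''{"Name":"pause-168060","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-168060","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}'''

cluster = json.loads(raw)
node = cluster["Nodes"][0]

# Paused cluster: apiserver reports 418 "Paused", kubelet 405 "Stopped".
assert cluster["StatusName"] == "Paused"
assert node["Components"]["apiserver"]["StatusName"] == "Paused"
assert node["Components"]["kubelet"]["StatusName"] == "Stopped"
print("cluster paused as expected")
```

The exit status 2 from the `status` command itself is the same stopped-component signal seen in the NoKubernetes tests above.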

TestPause/serial/Unpause (0.41s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-168060 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.41s)

TestPause/serial/PauseAgain (0.59s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-168060 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.59s)

TestPause/serial/DeletePaused (2.14s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-168060 --alsologtostderr -v=5
E0224 12:30:28.174468  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/skaffold-930053/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-168060 --alsologtostderr -v=5: (2.137331735s)
--- PASS: TestPause/serial/DeletePaused (2.14s)

TestPause/serial/VerifyDeletedResources (15.67s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E0224 12:30:38.416061  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/skaffold-930053/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (15.617638011s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-168060
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-168060: exit status 1 (16.592178ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-168060: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (15.67s)
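Note that `docker volume inspect` on a missing volume exits non-zero with the error on stderr, while stdout still carries an empty JSON array; that empty array is what confirms the deleted profile left no volume behind. A small sketch of that check, using the stdout shown above:

```python
import json

# stdout from `docker volume inspect pause-168060` after `minikube delete`:
# the daemon error goes to stderr, stdout is an empty JSON array.
stdout = "[]"

volumes = json.loads(stdout)
assert volumes == []  # no leftover volume for the deleted profile
print("profile volume fully removed")
```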

TestNetworkPlugins/group/kindnet/Start (60.3s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-705761 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-705761 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (1m0.30137091s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (60.30s)

TestNetworkPlugins/group/auto/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-705761 "pgrep -a kubelet"
I0224 12:30:52.127727  736216 config.go:182] Loaded profile config "auto-705761": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.27s)

TestNetworkPlugins/group/auto/NetCatPod (9.24s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-705761 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-h8m7s" [bd95ecc1-8517-419c-9318-6f768d42c5e8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-h8m7s" [bd95ecc1-8517-419c-9318-6f768d42c5e8] Running
E0224 12:30:58.897741  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/skaffold-930053/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.003118318s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.24s)

TestNetworkPlugins/group/auto/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-705761 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.14s)

TestNetworkPlugins/group/auto/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-705761 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

TestNetworkPlugins/group/auto/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-705761 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)

TestNetworkPlugins/group/false/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-705761 "pgrep -a kubelet"
I0224 12:31:10.231406  736216 config.go:182] Loaded profile config "false-705761": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.27s)

TestNetworkPlugins/group/false/NetCatPod (9.21s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-705761 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-8v698" [935e9e5d-6d2e-4ff5-8f51-0213bed7bd28] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-8v698" [935e9e5d-6d2e-4ff5-8f51-0213bed7bd28] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 9.003647234s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (9.21s)

TestNetworkPlugins/group/false/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-705761 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.13s)

TestNetworkPlugins/group/false/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-705761 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.11s)

TestNetworkPlugins/group/false/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-705761 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.11s)

TestNetworkPlugins/group/flannel/Start (47.5s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-705761 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
E0224 12:31:20.758631  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/functional-706877/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-705761 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (47.504266804s)
--- PASS: TestNetworkPlugins/group/flannel/Start (47.50s)

TestNetworkPlugins/group/enable-default-cni/Start (59.8s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-705761 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
E0224 12:31:39.859012  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/skaffold-930053/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-705761 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (59.795943679s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (59.80s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-lfxlv" [a47dc068-8310-4395-8001-5f44c9bececc] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003912863s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-705761 "pgrep -a kubelet"
I0224 12:31:52.076174  736216 config.go:182] Loaded profile config "kindnet-705761": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.24s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-705761 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-rkbm5" [e894600b-b77d-4a68-bdcd-38453d2d4dca] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-rkbm5" [e894600b-b77d-4a68-bdcd-38453d2d4dca] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.00454357s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.24s)

TestNetworkPlugins/group/kindnet/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-705761 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

TestNetworkPlugins/group/kindnet/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-705761 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

TestNetworkPlugins/group/kindnet/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-705761 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-lprgb" [07d7f50b-e6b6-43a6-a837-58c6e69ad682] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004229758s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-705761 "pgrep -a kubelet"
I0224 12:32:14.456404  736216 config.go:182] Loaded profile config "flannel-705761": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)

TestNetworkPlugins/group/flannel/NetCatPod (9.19s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-705761 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-vhfwt" [06e649db-ce55-4016-acf7-1fe96007a9bb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-vhfwt" [06e649db-ce55-4016-acf7-1fe96007a9bb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.003080546s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.19s)

TestNetworkPlugins/group/bridge/Start (65.69s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-705761 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-705761 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (1m5.687440892s)
--- PASS: TestNetworkPlugins/group/bridge/Start (65.69s)

TestNetworkPlugins/group/flannel/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-705761 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.13s)

TestNetworkPlugins/group/flannel/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-705761 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.11s)

TestNetworkPlugins/group/flannel/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-705761 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-705761 "pgrep -a kubelet"
I0224 12:32:38.728577  736216 config.go:182] Loaded profile config "enable-default-cni-705761": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-705761 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-prlfs" [483b89b5-f83f-45c0-9be0-8b87f63c06e2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-prlfs" [483b89b5-f83f-45c0-9be0-8b87f63c06e2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.003585654s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.23s)

TestNetworkPlugins/group/kubenet/Start (69.65s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-705761 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-705761 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (1m9.651837638s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (69.65s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-705761 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-705761 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-705761 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

TestNetworkPlugins/group/custom-flannel/Start (42.05s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-705761 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
E0224 12:33:01.781380  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/skaffold-930053/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-705761 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (42.045358351s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (42.05s)

TestNetworkPlugins/group/calico/Start (35.49s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-705761 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-705761 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (35.487598038s)
--- PASS: TestNetworkPlugins/group/calico/Start (35.49s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-705761 "pgrep -a kubelet"
I0224 12:33:26.218973  736216 config.go:182] Loaded profile config "bridge-705761": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

TestNetworkPlugins/group/bridge/NetCatPod (9.23s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-705761 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-zmfm8" [b98613cd-bb37-4bd3-8b8f-7d21e624d596] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-zmfm8" [b98613cd-bb37-4bd3-8b8f-7d21e624d596] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.003567508s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.23s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-705761 "pgrep -a kubelet"
I0224 12:33:32.161133  736216 config.go:182] Loaded profile config "custom-flannel-705761": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-705761 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-t7h66" [16e47c2e-aa59-4133-adee-18d8325172e2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-t7h66" [16e47c2e-aa59-4133-adee-18d8325172e2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.00471658s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.19s)

TestNetworkPlugins/group/bridge/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-705761 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

TestNetworkPlugins/group/bridge/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-705761 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

TestNetworkPlugins/group/bridge/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-705761 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)

TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-705761 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-705761 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-705761 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

TestNetworkPlugins/group/calico/ControllerPod (19.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-wzf9p" [e3ca459d-f865-4648-93f8-51a01854c214] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:344: "calico-node-wzf9p" [e3ca459d-f865-4648-93f8-51a01854c214] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:344: "calico-node-wzf9p" [e3ca459d-f865-4648-93f8-51a01854c214] Pending / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:344: "calico-node-wzf9p" [e3ca459d-f865-4648-93f8-51a01854c214] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 19.004135936s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (19.01s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-705761 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.38s)

TestNetworkPlugins/group/kubenet/NetCatPod (8.36s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-705761 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-5qj7j" [8c9845d5-6917-40b0-afa1-caffc9ddcb66] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-5qj7j" [8c9845d5-6917-40b0-afa1-caffc9ddcb66] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 8.005535571s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (8.36s)

TestStartStop/group/old-k8s-version/serial/FirstStart (128.62s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-769540 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-769540 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m8.618654045s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (128.62s)

TestNetworkPlugins/group/kubenet/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-705761 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.15s)

TestNetworkPlugins/group/kubenet/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-705761 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.15s)

TestNetworkPlugins/group/kubenet/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-705761 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.14s)

TestNetworkPlugins/group/calico/KubeletFlags (0.46s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-705761 "pgrep -a kubelet"
I0224 12:34:03.958363  736216 config.go:182] Loaded profile config "calico-705761": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.46s)

TestNetworkPlugins/group/calico/NetCatPod (10.2s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-705761 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-bsbzc" [90c33d17-f630-4d22-b44a-abf40341176a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-5d86dc444-bsbzc" [90c33d17-f630-4d22-b44a-abf40341176a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.004351761s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.20s)

TestStartStop/group/no-preload/serial/FirstStart (79.24s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-360561 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-360561 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.2: (1m19.237751803s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (79.24s)

TestNetworkPlugins/group/calico/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-705761 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)

TestNetworkPlugins/group/calico/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-705761 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

TestNetworkPlugins/group/calico/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-705761 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)
E0224 12:37:26.736508  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/kindnet-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:37:28.684876  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/flannel-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:37:32.371303  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/false-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:37:38.939905  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/enable-default-cni-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:37:38.946268  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/enable-default-cni-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:37:38.957603  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/enable-default-cni-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:37:38.978926  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/enable-default-cni-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:37:39.020277  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/enable-default-cni-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:37:39.101692  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/enable-default-cni-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:37:39.263205  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/enable-default-cni-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:37:39.584859  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/enable-default-cni-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:37:40.226299  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/enable-default-cni-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:37:41.507999  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/enable-default-cni-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:37:44.069335  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/enable-default-cni-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:37:49.166531  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/flannel-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:37:49.191156  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/enable-default-cni-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:37:59.433355  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/enable-default-cni-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:38:07.697879  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/kindnet-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:38:19.914740  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/enable-default-cni-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:38:26.430395  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/bridge-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:38:26.436764  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/bridge-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:38:26.448122  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/bridge-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:38:26.469411  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/bridge-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:38:26.510735  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/bridge-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:38:26.592156  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/bridge-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:38:26.753681  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/bridge-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:38:27.075351  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/bridge-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:38:27.717420  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/bridge-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:38:28.999084  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/bridge-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:38:30.127825  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/flannel-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:38:31.560742  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/bridge-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:38:32.335103  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/custom-flannel-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:38:32.341573  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/custom-flannel-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:38:32.352887  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/custom-flannel-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:38:32.374191  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/custom-flannel-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:38:32.415506  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/custom-flannel-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:38:32.496880  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/custom-flannel-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:38:32.658367  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/custom-flannel-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:38:32.980067  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/custom-flannel-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:38:33.622375  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/custom-flannel-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:38:34.904066  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/custom-flannel-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:38:36.203438  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/auto-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:38:36.682448  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/bridge-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:38:37.466417  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/custom-flannel-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:38:42.588363  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/custom-flannel-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:38:44.494544  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/calico-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:38:44.500899  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/calico-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:38:44.512294  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/calico-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:38:44.533667  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/calico-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:38:44.575063  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/calico-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:38:44.656461  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/calico-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:38:44.817991  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/calico-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:38:45.139669  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/calico-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:38:45.781089  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/calico-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:38:46.924691  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/bridge-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:38:47.063068  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/calico-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:38:49.624380  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/calico-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:38:52.830086  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/custom-flannel-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:38:54.292720  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/false-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:38:54.381340  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/kubenet-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:38:54.387699  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/kubenet-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:38:54.399031  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/kubenet-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:38:54.420366  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/kubenet-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:38:54.461696  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/kubenet-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:38:54.543217  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/kubenet-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:38:54.705101  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/kubenet-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:38:54.746567  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/calico-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:38:55.027248  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/kubenet-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:38:55.669335  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/kubenet-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:38:56.951100  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/kubenet-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:38:59.512818  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/kubenet-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:39:00.876575  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/enable-default-cni-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:39:04.634390  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/kubenet-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:39:04.987995  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/calico-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:39:07.406390  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/bridge-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:39:13.311404  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/custom-flannel-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:39:14.876340  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/kubenet-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:39:23.829982  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/functional-706877/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:39:25.469718  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/calico-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:39:29.619242  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/kindnet-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:39:35.358392  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/kubenet-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:39:48.368216  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/bridge-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:39:52.049678  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/flannel-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:39:54.273658  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/custom-flannel-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:40:04.364416  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/addons-463362/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:40:06.431442  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/calico-705761/client.crt: no such file or directory" logger="UnhandledError"

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (67.43s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-402044 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-402044 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.2: (1m7.432408567s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (67.43s)

TestStartStop/group/newest-cni/serial/FirstStart (30.02s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-598291 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.2
E0224 12:35:04.364035  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/addons-463362/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-598291 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.2: (30.021921658s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (30.02s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.85s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-598291 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.85s)

TestStartStop/group/newest-cni/serial/Stop (10.82s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-598291 --alsologtostderr -v=3
E0224 12:35:17.922377  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/skaffold-930053/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-598291 --alsologtostderr -v=3: (10.824263209s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.82s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-598291 -n newest-cni-598291
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-598291 -n newest-cni-598291: exit status 7 (71.907641ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-598291 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/newest-cni/serial/SecondStart (14.21s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-598291 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-598291 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.2: (13.854698625s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-598291 -n newest-cni-598291
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (14.21s)

TestStartStop/group/no-preload/serial/DeployApp (9.31s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-360561 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [fe936885-0c97-4aca-a80a-38fa25fa025b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [fe936885-0c97-4aca-a80a-38fa25fa025b] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003780025s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-360561 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.31s)
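The final step of DeployApp above execs `ulimit -n` inside the busybox pod; the test only requires that the exec succeeds and returns the container's open-file limit. A hedged local stand-in for that probe (run against the local shell rather than a pod — the validation logic here is illustrative, not part of the test itself):

```shell
#!/bin/sh
# Query the soft limit on open file descriptors, as the test does via
# `kubectl exec busybox -- /bin/sh -c "ulimit -n"`.
limit=$(sh -c 'ulimit -n')

# Sanity-check the result: it should be "unlimited" or a positive integer.
case $limit in
  unlimited)     echo "open-file limit: unlimited" ;;
  ''|*[!0-9]*)   echo "unexpected value: $limit"; exit 1 ;;
  *)             echo "open-file limit: $limit" ;;
esac
```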

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.31s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-402044 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ce639956-45fd-4098-aeeb-d7d5f657bd03] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ce639956-45fd-4098-aeeb-d7d5f657bd03] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.003843475s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-402044 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.31s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.94s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-360561 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-360561 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.94s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-598291 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/newest-cni/serial/Pause (2.5s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-598291 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-598291 -n newest-cni-598291
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-598291 -n newest-cni-598291: exit status 2 (278.891336ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-598291 -n newest-cni-598291
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-598291 -n newest-cni-598291: exit status 2 (290.667481ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-598291 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-598291 -n newest-cni-598291
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-598291 -n newest-cni-598291
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.50s)
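The `status --format={{.APIServer}}` / `--format={{.Kubelet}}` flags above are Go text/template expressions evaluated against minikube's status struct. A minimal sketch of how such a template renders, using a stand-in `Status` struct (the field set here is an assumption mirroring the templates in the log, not minikube's actual type):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// Status is a stand-in for minikube's status struct; the field names
// mirror the templates used in the log ({{.Host}}, {{.Kubelet}}, {{.APIServer}}).
type Status struct {
	Host      string
	Kubelet   string
	APIServer string
}

// render evaluates a --format style Go template against a status value.
func render(format string, st Status) (string, error) {
	tmpl, err := template.New("status").Parse(format)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, st); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	// State observed after `minikube pause` in the run above:
	// API server reports Paused, kubelet reports Stopped.
	paused := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Paused"}
	out, err := render("{{.APIServer}}", paused)
	if err != nil {
		panic(err)
	}
	fmt.Println(out) // Paused
}
```

This is why the test sees bare `Paused` / `Stopped` strings on stdout: the template selects a single field and prints it with no decoration.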

TestStartStop/group/no-preload/serial/Stop (10.76s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-360561 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-360561 --alsologtostderr -v=3: (10.758690207s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (10.76s)

TestStartStop/group/embed-certs/serial/FirstStart (62.65s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-481649 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-481649 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.2: (1m2.653421647s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (62.65s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.93s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-402044 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-402044 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.93s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (10.81s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-402044 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-402044 --alsologtostderr -v=3: (10.807017315s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (10.81s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.27s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-360561 -n no-preload-360561
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-360561 -n no-preload-360561: exit status 7 (159.343826ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-360561 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.27s)
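The "status error: exit status 7 (may be ok)" lines show the test tolerating non-zero exit codes from `minikube status` when they encode a legitimate state (exit 7 for a Stopped host here, exit 2 for a paused component in the Pause test above). A sketch of that classification as a helper; treating exactly 0, 2, and 7 as tolerable is an assumption drawn from the "(may be ok)" annotations in this log, not from minikube documentation:

```go
package main

import "fmt"

// statusMayBeOK reports whether a `minikube status` exit code can still
// represent a legitimate cluster state (running, paused component, or
// stopped host) rather than a command failure. The code set {0, 2, 7}
// is inferred from the "(may be ok)" notes in the log above.
func statusMayBeOK(code int) bool {
	switch code {
	case 0, 2, 7:
		return true
	default:
		return false
	}
}

func main() {
	for _, code := range []int{0, 2, 7, 1} {
		fmt.Printf("exit %d: may be ok = %v\n", code, statusMayBeOK(code))
	}
}
```

With a check like this, a caller can proceed to the follow-up assertion (e.g. `addons enable dashboard` against the stopped profile) instead of failing fast on the non-zero status.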

TestStartStop/group/no-preload/serial/SecondStart (262.9s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-360561 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.2
E0224 12:35:45.623703  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/skaffold-930053/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-360561 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.2: (4m22.601633025s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-360561 -n no-preload-360561
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (262.90s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-402044 -n default-k8s-diff-port-402044
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-402044 -n default-k8s-diff-port-402044: exit status 7 (151.599608ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-402044 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.28s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (286.77s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-402044 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.2
E0224 12:35:52.340821  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/auto-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:35:52.347178  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/auto-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:35:52.358552  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/auto-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:35:52.380795  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/auto-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:35:52.422411  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/auto-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:35:52.503707  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/auto-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:35:52.668004  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/auto-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:35:52.989471  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/auto-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:35:53.631539  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/auto-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:35:54.913637  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/auto-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:35:57.474986  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/auto-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:36:02.597126  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/auto-705761/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-402044 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.2: (4m46.493525959s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-402044 -n default-k8s-diff-port-402044
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (286.77s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.44s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-769540 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a619cdea-4c7a-4eda-970d-781304ab5862] Pending
helpers_test.go:344: "busybox" [a619cdea-4c7a-4eda-970d-781304ab5862] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a619cdea-4c7a-4eda-970d-781304ab5862] Running
E0224 12:36:10.432702  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/false-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:36:10.439091  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/false-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:36:10.450452  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/false-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:36:10.471817  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/false-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:36:10.513260  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/false-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:36:10.594710  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/false-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:36:10.756434  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/false-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:36:11.078085  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/false-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:36:11.720138  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/false-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:36:12.838922  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/auto-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:36:13.002388  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/false-705761/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.003733962s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-769540 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.44s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.74s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-769540 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-769540 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.74s)

TestStartStop/group/old-k8s-version/serial/Stop (10.86s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-769540 --alsologtostderr -v=3
E0224 12:36:15.564394  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/false-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:36:20.685916  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/false-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:36:20.758595  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/functional-706877/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-769540 --alsologtostderr -v=3: (10.861442615s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (10.86s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-769540 -n old-k8s-version-769540
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-769540 -n old-k8s-version-769540: exit status 7 (88.993325ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-769540 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/old-k8s-version/serial/SecondStart (23.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-769540 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
E0224 12:36:30.927988  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/false-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:36:33.320694  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/auto-705761/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-769540 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (22.936547688s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-769540 -n old-k8s-version-769540
E0224 12:36:48.328645  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/kindnet-705761/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (23.23s)

TestStartStop/group/embed-certs/serial/DeployApp (10.27s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-481649 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ec613fd2-d65a-42df-bd8f-2f7ab5c6015a] Pending
helpers_test.go:344: "busybox" [ec613fd2-d65a-42df-bd8f-2f7ab5c6015a] Running
E0224 12:36:45.758798  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/kindnet-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:36:45.765119  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/kindnet-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:36:45.777399  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/kindnet-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:36:45.798848  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/kindnet-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:36:45.840300  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/kindnet-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:36:45.921932  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/kindnet-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:36:46.083376  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/kindnet-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:36:46.404744  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/kindnet-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:36:47.046437  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/kindnet-705761/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.002702479s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-481649 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.27s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (24.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0224 12:36:50.890710  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/kindnet-705761/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "kubernetes-dashboard-cd95d586-fkjcd" [66d915fb-909b-4a91-81a9-a3ed6c69299a] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-cd95d586-fkjcd" [66d915fb-909b-4a91-81a9-a3ed6c69299a] Running
E0224 12:37:08.190342  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/flannel-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:37:08.196686  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/flannel-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:37:08.208033  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/flannel-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:37:08.229425  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/flannel-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:37:08.270801  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/flannel-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:37:08.352255  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/flannel-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:37:08.513561  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/flannel-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:37:08.835134  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/flannel-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:37:09.477045  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/flannel-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:37:10.758941  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/flannel-705761/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 24.004043614s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (24.01s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.89s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-481649 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0224 12:36:51.409709  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/false-705761/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-481649 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.89s)

TestStartStop/group/embed-certs/serial/Stop (10.94s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-481649 --alsologtostderr -v=3
E0224 12:36:56.012869  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/kindnet-705761/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-481649 --alsologtostderr -v=3: (10.943839847s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (10.94s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-481649 -n embed-certs-481649
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-481649 -n embed-certs-481649: exit status 7 (88.665045ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-481649 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/embed-certs/serial/SecondStart (262.03s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-481649 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.2
E0224 12:37:06.254644  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/kindnet-705761/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-481649 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.2: (4m21.736819474s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-481649 -n embed-certs-481649
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (262.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-fkjcd" [66d915fb-909b-4a91-81a9-a3ed6c69299a] Running
E0224 12:37:13.320886  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/flannel-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:37:14.282087  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/auto-705761/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00390485s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-769540 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-769540 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.21s)

TestStartStop/group/old-k8s-version/serial/Pause (2.45s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-769540 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-769540 -n old-k8s-version-769540
E0224 12:37:18.443078  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/flannel-705761/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-769540 -n old-k8s-version-769540: exit status 2 (311.890314ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-769540 -n old-k8s-version-769540
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-769540 -n old-k8s-version-769540: exit status 2 (297.887415ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-769540 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-769540 -n old-k8s-version-769540
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-769540 -n old-k8s-version-769540
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.45s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-zkc88" [31904d2d-6eb7-4f61-a79a-2286eb0582ec] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002838004s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-zkc88" [31904d2d-6eb7-4f61-a79a-2286eb0582ec] Running
E0224 12:40:16.319711  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/kubenet-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:40:17.922068  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/skaffold-930053/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003470121s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-360561 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-360561 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.21s)

TestStartStop/group/no-preload/serial/Pause (2.23s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-360561 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-360561 -n no-preload-360561
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-360561 -n no-preload-360561: exit status 2 (291.561878ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-360561 -n no-preload-360561
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-360561 -n no-preload-360561: exit status 2 (278.38455ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-360561 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-360561 -n no-preload-360561
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-360561 -n no-preload-360561
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.23s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-8fnsg" [7272a63d-52b3-4c00-a757-053cd655a55b] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003495409s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-8fnsg" [7272a63d-52b3-4c00-a757-053cd655a55b] Running
E0224 12:40:43.905963  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/no-preload-360561/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003065273s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-402044 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-402044 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.21s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-402044 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-402044 -n default-k8s-diff-port-402044
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-402044 -n default-k8s-diff-port-402044: exit status 2 (272.760753ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-402044 -n default-k8s-diff-port-402044
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-402044 -n default-k8s-diff-port-402044: exit status 2 (274.098123ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-402044 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-402044 -n default-k8s-diff-port-402044
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-402044 -n default-k8s-diff-port-402044
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.26s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-xchcw" [073204a0-5b2b-4a40-8b28-b634f1b60a87] Running
E0224 12:41:25.908151  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/old-k8s-version-769540/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:41:28.353553  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/calico-705761/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002789527s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-7779f9b69b-xchcw" [073204a0-5b2b-4a40-8b28-b634f1b60a87] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003489151s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-481649 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.2s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-481649 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.20s)

TestStartStop/group/embed-certs/serial/Pause (2.26s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-481649 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-481649 -n embed-certs-481649
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-481649 -n embed-certs-481649: exit status 2 (272.89881ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-481649 -n embed-certs-481649
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-481649 -n embed-certs-481649: exit status 2 (270.851061ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-481649 --alsologtostderr -v=1
E0224 12:41:38.134160  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/false-705761/client.crt: no such file or directory" logger="UnhandledError"
E0224 12:41:38.241714  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/kubenet-705761/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-481649 -n embed-certs-481649
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-481649 -n embed-certs-481649
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.26s)
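A note on the recurring "status error: exit status N (may be ok)" lines in the Pause tests above: `minikube status` deliberately exits non-zero when the profile is not fully running, so the harness treats those exits as informational rather than failures. A minimal sketch of that interpretation, with the 2/7 meanings inferred only from the log lines here (not from an authoritative exit-code table), and `classify_status` being a hypothetical helper name:

```shell
# Interpret a `minikube status` exit code the way the test harness above does:
# a non-zero exit can still be "ok" when the profile was intentionally
# paused or stopped. Codes 2 and 7 are the ones observed in this report.
classify_status() {
  case "$1" in
    0)   echo "running" ;;
    2|7) echo "not running (may be ok)" ;;  # seen right after pause/stop above
    *)   echo "status error" ;;
  esac
}
```

For example, the EnableAddonAfterStop test sees exit status 7 after `minikube stop`, and the Pause tests see exit status 2 after `minikube pause`; both map to "may be ok" here.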

Test skip (22/346)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.32.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.32.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.2/cached-images (0.00s)

TestDownloadOnly/v1.32.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.32.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.2/binaries (0.00s)

TestDownloadOnly/v1.32.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.32.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.32.2/kubectl (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:702: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:567: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:84: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/cilium (3.5s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
E0224 12:26:20.759089  736216 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/functional-706877/client.crt: no such file or directory" logger="UnhandledError"
panic.go:629: 
----------------------- debugLogs start: cilium-705761 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-705761

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-705761

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-705761

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-705761

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-705761

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-705761

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-705761

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-705761

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-705761

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-705761

>>> host: /etc/nsswitch.conf:
* Profile "cilium-705761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-705761"

>>> host: /etc/hosts:
* Profile "cilium-705761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-705761"

>>> host: /etc/resolv.conf:
* Profile "cilium-705761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-705761"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-705761

>>> host: crictl pods:
* Profile "cilium-705761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-705761"

>>> host: crictl containers:
* Profile "cilium-705761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-705761"

>>> k8s: describe netcat deployment:
error: context "cilium-705761" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-705761" does not exist

>>> k8s: netcat logs:
error: context "cilium-705761" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-705761" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-705761" does not exist

>>> k8s: coredns logs:
error: context "cilium-705761" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-705761" does not exist

>>> k8s: api server logs:
error: context "cilium-705761" does not exist

>>> host: /etc/cni:
* Profile "cilium-705761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-705761"

>>> host: ip a s:
* Profile "cilium-705761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-705761"

>>> host: ip r s:
* Profile "cilium-705761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-705761"

>>> host: iptables-save:
* Profile "cilium-705761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-705761"

>>> host: iptables table nat:
* Profile "cilium-705761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-705761"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-705761

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-705761

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-705761" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-705761" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-705761

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-705761

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-705761" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-705761" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-705761" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-705761" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-705761" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-705761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-705761"

>>> host: kubelet daemon config:
* Profile "cilium-705761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-705761"

>>> k8s: kubelet logs:
* Profile "cilium-705761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-705761"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-705761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-705761"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-705761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-705761"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20451-729451/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 24 Feb 2025 12:26:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: NoKubernetes-791364
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20451-729451/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 24 Feb 2025 12:26:17 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: offline-docker-772723
contexts:
- context:
    cluster: NoKubernetes-791364
    extensions:
    - extension:
        last-update: Mon, 24 Feb 2025 12:26:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: NoKubernetes-791364
  name: NoKubernetes-791364
- context:
    cluster: offline-docker-772723
    extensions:
    - extension:
        last-update: Mon, 24 Feb 2025 12:26:17 UTC
        provider: minikube.sigs.k8s.io
        version: v1.35.0
      name: context_info
    namespace: default
    user: offline-docker-772723
  name: offline-docker-772723
current-context: offline-docker-772723
kind: Config
preferences: {}
users:
- name: NoKubernetes-791364
  user:
    client-certificate: /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/NoKubernetes-791364/client.crt
    client-key: /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/NoKubernetes-791364/client.key
- name: offline-docker-772723
  user:
    client-certificate: /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/offline-docker-772723/client.crt
    client-key: /home/jenkins/minikube-integration/20451-729451/.minikube/profiles/offline-docker-772723/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-705761

>>> host: docker daemon status:
* Profile "cilium-705761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-705761"

>>> host: docker daemon config:
* Profile "cilium-705761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-705761"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-705761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-705761"

>>> host: docker system info:
* Profile "cilium-705761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-705761"

>>> host: cri-docker daemon status:
* Profile "cilium-705761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-705761"

>>> host: cri-docker daemon config:
* Profile "cilium-705761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-705761"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-705761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-705761"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-705761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-705761"

>>> host: cri-dockerd version:
* Profile "cilium-705761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-705761"

>>> host: containerd daemon status:
* Profile "cilium-705761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-705761"

>>> host: containerd daemon config:
* Profile "cilium-705761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-705761"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-705761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-705761"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-705761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-705761"

>>> host: containerd config dump:
* Profile "cilium-705761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-705761"

>>> host: crio daemon status:
* Profile "cilium-705761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-705761"

>>> host: crio daemon config:
* Profile "cilium-705761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-705761"

>>> host: /etc/crio:
* Profile "cilium-705761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-705761"

>>> host: crio config:
* Profile "cilium-705761" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-705761"

----------------------- debugLogs end: cilium-705761 [took: 3.348111814s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-705761" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-705761
--- SKIP: TestNetworkPlugins/group/cilium (3.50s)

TestStartStop/group/disable-driver-mounts (0.15s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-676214" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-676214
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)
