Test Report: Docker_Linux_docker_arm64 19672

d6d2a37830b251a8a712eec07ee86a534797346d:2024-09-20:36302

Failed tests (1/342)

| Order | Failed test                  | Duration |
|-------|------------------------------|----------|
|    33 | TestAddons/parallel/Registry | 75s      |
TestAddons/parallel/Registry (75s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 2.183393ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-zscw5" [1b3b7fa1-d8ab-4ede-bf68-c006a0d0c180] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.00379447s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-swlk5" [8167ccc6-dd33-4103-903e-3eed5bbba124] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004396061s
addons_test.go:338: (dbg) Run:  kubectl --context addons-860203 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context addons-860203 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context addons-860203 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.156772357s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-860203 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:357: (dbg) Run:  out/minikube-linux-arm64 -p addons-860203 ip
2024/09/20 22:24:20 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:386: (dbg) Run:  out/minikube-linux-arm64 -p addons-860203 addons disable registry --alsologtostderr -v=1
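Note: the in-cluster probe ("wget --spider -S http://registry.kube-system.svc.cluster.local" from the busybox pod) timed out, while the harness then issued a direct GET against the node IP on the registry port (the DEBUG line above). To reproduce that host-side probe by hand, a minimal Go sketch — the endpoint is taken from the log above; the 10-second timeout is an assumption, not the test's own value:

	package main

	import (
		"fmt"
		"log"
		"net/http"
		"time"
	)

	func main() {
		// Probe the registry through the minikube node IP, mirroring the
		// harness's debug request "GET http://192.168.49.2:5000" above.
		client := &http.Client{Timeout: 10 * time.Second}
		resp, err := client.Get("http://192.168.49.2:5000")
		if err != nil {
			log.Fatalf("registry probe failed: %v", err)
		}
		defer resp.Body.Close()
		fmt.Printf("registry responded: %s\n", resp.Status) // a healthy registry returns "200 OK"
	}

If this host-side probe succeeds while the in-cluster wget times out, the registry itself is up and the failure points at in-cluster name resolution or service proxying rather than the registry addon.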
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-860203
helpers_test.go:235: (dbg) docker inspect addons-860203:

-- stdout --
	[
	    {
	        "Id": "83c4790b0fa589294a61c6b5d0a91589634ad921605ef915abaa0be07212d47c",
	        "Created": "2024-09-20T22:11:06.961193498Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1437740,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-20T22:11:07.082473981Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c94982da1293baee77c00993711af197ed62d6b1a4ee12c0caa4f57c70de4fdc",
	        "ResolvConfPath": "/var/lib/docker/containers/83c4790b0fa589294a61c6b5d0a91589634ad921605ef915abaa0be07212d47c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/83c4790b0fa589294a61c6b5d0a91589634ad921605ef915abaa0be07212d47c/hostname",
	        "HostsPath": "/var/lib/docker/containers/83c4790b0fa589294a61c6b5d0a91589634ad921605ef915abaa0be07212d47c/hosts",
	        "LogPath": "/var/lib/docker/containers/83c4790b0fa589294a61c6b5d0a91589634ad921605ef915abaa0be07212d47c/83c4790b0fa589294a61c6b5d0a91589634ad921605ef915abaa0be07212d47c-json.log",
	        "Name": "/addons-860203",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-860203:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-860203",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/d332e1eb40bf7b73aefb7c0588aa883bb8a0d1e70263cef1ecf4c0b202772a91-init/diff:/var/lib/docker/overlay2/d4e86102177f5c473b4485deac53c850f244ada82d41fc536b2bc92ae7aec33d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d332e1eb40bf7b73aefb7c0588aa883bb8a0d1e70263cef1ecf4c0b202772a91/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d332e1eb40bf7b73aefb7c0588aa883bb8a0d1e70263cef1ecf4c0b202772a91/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d332e1eb40bf7b73aefb7c0588aa883bb8a0d1e70263cef1ecf4c0b202772a91/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-860203",
	                "Source": "/var/lib/docker/volumes/addons-860203/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-860203",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-860203",
	                "name.minikube.sigs.k8s.io": "addons-860203",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "36cfefdc5f846ea096aa610a778402052a1234d6ca4185e7c7cd29cfc1d775fc",
	            "SandboxKey": "/var/run/docker/netns/36cfefdc5f84",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33530"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33531"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33534"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33532"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33533"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-860203": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "77b8e80488227421a5f20a4663748c50555b3e130ab543d6d57d305c9d065191",
	                    "EndpointID": "8604a6091ba78bb155e8f9c6ec78e9b12bdb41547ad8d4e752dfad9c0a5e9174",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-860203",
	                        "83c4790b0fa5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
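The inspect output above shows the ephemeral host ports Docker bound for the node container (e.g. 22/tcp -> 127.0.0.1:33530 under NetworkSettings.Ports); minikube reads these back with a "docker container inspect -f" template, visible in the provisioning log below. A minimal sketch of the same lookup via the Docker Go SDK, assuming the standard github.com/docker/docker client — an illustration only, not how the harness itself does it:

	package main

	import (
		"context"
		"fmt"
		"log"

		"github.com/docker/docker/client"
		"github.com/docker/go-connections/nat"
	)

	func main() {
		cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
		if err != nil {
			log.Fatal(err)
		}
		defer cli.Close()

		// Inspect the node container and read the host port bound to its
		// SSH port, as listed under NetworkSettings.Ports in the dump above.
		info, err := cli.ContainerInspect(context.Background(), "addons-860203")
		if err != nil {
			log.Fatal(err)
		}
		for _, b := range info.NetworkSettings.Ports[nat.Port("22/tcp")] {
			fmt.Printf("22/tcp -> %s:%s\n", b.HostIP, b.HostPort) // e.g. 127.0.0.1:33530
		}
	}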
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-860203 -n addons-860203
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-860203 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-860203 logs -n 25: (1.520791375s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-837951   | jenkins | v1.34.0 | 20 Sep 24 22:10 UTC |                     |
	|         | -p download-only-837951              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 20 Sep 24 22:10 UTC | 20 Sep 24 22:10 UTC |
	| delete  | -p download-only-837951              | download-only-837951   | jenkins | v1.34.0 | 20 Sep 24 22:10 UTC | 20 Sep 24 22:10 UTC |
	| start   | -o=json --download-only              | download-only-363766   | jenkins | v1.34.0 | 20 Sep 24 22:10 UTC |                     |
	|         | -p download-only-363766              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 20 Sep 24 22:10 UTC | 20 Sep 24 22:10 UTC |
	| delete  | -p download-only-363766              | download-only-363766   | jenkins | v1.34.0 | 20 Sep 24 22:10 UTC | 20 Sep 24 22:10 UTC |
	| delete  | -p download-only-837951              | download-only-837951   | jenkins | v1.34.0 | 20 Sep 24 22:10 UTC | 20 Sep 24 22:10 UTC |
	| delete  | -p download-only-363766              | download-only-363766   | jenkins | v1.34.0 | 20 Sep 24 22:10 UTC | 20 Sep 24 22:10 UTC |
	| start   | --download-only -p                   | download-docker-030597 | jenkins | v1.34.0 | 20 Sep 24 22:10 UTC |                     |
	|         | download-docker-030597               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	| delete  | -p download-docker-030597            | download-docker-030597 | jenkins | v1.34.0 | 20 Sep 24 22:10 UTC | 20 Sep 24 22:10 UTC |
	| start   | --download-only -p                   | binary-mirror-911328   | jenkins | v1.34.0 | 20 Sep 24 22:10 UTC |                     |
	|         | binary-mirror-911328                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:45811               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-911328              | binary-mirror-911328   | jenkins | v1.34.0 | 20 Sep 24 22:10 UTC | 20 Sep 24 22:10 UTC |
	| addons  | enable dashboard -p                  | addons-860203          | jenkins | v1.34.0 | 20 Sep 24 22:10 UTC |                     |
	|         | addons-860203                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-860203          | jenkins | v1.34.0 | 20 Sep 24 22:10 UTC |                     |
	|         | addons-860203                        |                        |         |         |                     |                     |
	| start   | -p addons-860203 --wait=true         | addons-860203          | jenkins | v1.34.0 | 20 Sep 24 22:10 UTC | 20 Sep 24 22:14 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	| addons  | addons-860203 addons disable         | addons-860203          | jenkins | v1.34.0 | 20 Sep 24 22:14 UTC | 20 Sep 24 22:15 UTC |
	|         | volcano --alsologtostderr -v=1       |                        |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-860203          | jenkins | v1.34.0 | 20 Sep 24 22:23 UTC | 20 Sep 24 22:23 UTC |
	|         | -p addons-860203                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-860203 addons disable         | addons-860203          | jenkins | v1.34.0 | 20 Sep 24 22:23 UTC | 20 Sep 24 22:23 UTC |
	|         | headlamp --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-860203 addons                 | addons-860203          | jenkins | v1.34.0 | 20 Sep 24 22:23 UTC | 20 Sep 24 22:23 UTC |
	|         | disable csi-hostpath-driver          |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-860203 addons                 | addons-860203          | jenkins | v1.34.0 | 20 Sep 24 22:23 UTC | 20 Sep 24 22:24 UTC |
	|         | disable volumesnapshots              |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-860203 addons                 | addons-860203          | jenkins | v1.34.0 | 20 Sep 24 22:24 UTC | 20 Sep 24 22:24 UTC |
	|         | disable metrics-server               |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-860203          | jenkins | v1.34.0 | 20 Sep 24 22:24 UTC | 20 Sep 24 22:24 UTC |
	|         | addons-860203                        |                        |         |         |                     |                     |
	| ip      | addons-860203 ip                     | addons-860203          | jenkins | v1.34.0 | 20 Sep 24 22:24 UTC | 20 Sep 24 22:24 UTC |
	| addons  | addons-860203 addons disable         | addons-860203          | jenkins | v1.34.0 | 20 Sep 24 22:24 UTC | 20 Sep 24 22:24 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 22:10:42
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 22:10:42.715592 1437250 out.go:345] Setting OutFile to fd 1 ...
	I0920 22:10:42.715746 1437250 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 22:10:42.715756 1437250 out.go:358] Setting ErrFile to fd 2...
	I0920 22:10:42.715762 1437250 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 22:10:42.716038 1437250 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-1431110/.minikube/bin
	I0920 22:10:42.716555 1437250 out.go:352] Setting JSON to false
	I0920 22:10:42.717419 1437250 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":21194,"bootTime":1726849049,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0920 22:10:42.717494 1437250 start.go:139] virtualization:  
	I0920 22:10:42.719426 1437250 out.go:177] * [addons-860203] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0920 22:10:42.721156 1437250 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 22:10:42.721219 1437250 notify.go:220] Checking for updates...
	I0920 22:10:42.723588 1437250 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 22:10:42.725369 1437250 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-1431110/kubeconfig
	I0920 22:10:42.726530 1437250 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-1431110/.minikube
	I0920 22:10:42.727597 1437250 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0920 22:10:42.728780 1437250 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 22:10:42.730429 1437250 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 22:10:42.756251 1437250 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0920 22:10:42.756433 1437250 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 22:10:42.818380 1437250 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-20 22:10:42.808309287 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 22:10:42.818489 1437250 docker.go:318] overlay module found
	I0920 22:10:42.820347 1437250 out.go:177] * Using the docker driver based on user configuration
	I0920 22:10:42.821457 1437250 start.go:297] selected driver: docker
	I0920 22:10:42.821482 1437250 start.go:901] validating driver "docker" against <nil>
	I0920 22:10:42.821497 1437250 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 22:10:42.822208 1437250 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 22:10:42.880305 1437250 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-20 22:10:42.869143234 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 22:10:42.880564 1437250 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 22:10:42.880815 1437250 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 22:10:42.882020 1437250 out.go:177] * Using Docker driver with root privileges
	I0920 22:10:42.883267 1437250 cni.go:84] Creating CNI manager for ""
	I0920 22:10:42.883348 1437250 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 22:10:42.883365 1437250 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 22:10:42.883463 1437250 start.go:340] cluster config:
	{Name:addons-860203 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-860203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 22:10:42.884875 1437250 out.go:177] * Starting "addons-860203" primary control-plane node in "addons-860203" cluster
	I0920 22:10:42.886090 1437250 cache.go:121] Beginning downloading kic base image for docker with docker
	I0920 22:10:42.887265 1437250 out.go:177] * Pulling base image v0.0.45-1726784731-19672 ...
	I0920 22:10:42.889407 1437250 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 22:10:42.889501 1437250 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19672-1431110/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 22:10:42.889575 1437250 cache.go:56] Caching tarball of preloaded images
	I0920 22:10:42.889691 1437250 preload.go:172] Found /home/jenkins/minikube-integration/19672-1431110/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 22:10:42.889707 1437250 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 22:10:42.890060 1437250 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/addons-860203/config.json ...
	I0920 22:10:42.890087 1437250 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/addons-860203/config.json: {Name:mk6ccbbaef35aa77ef4de002191730411b1badd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:10:42.889502 1437250 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon
	I0920 22:10:42.905181 1437250 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed to local cache
	I0920 22:10:42.905303 1437250 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory
	I0920 22:10:42.905330 1437250 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory, skipping pull
	I0920 22:10:42.905338 1437250 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed exists in cache, skipping pull
	I0920 22:10:42.905346 1437250 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed as a tarball
	I0920 22:10:42.905358 1437250 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed from local cache
	I0920 22:11:00.024434 1437250 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed from cached tarball
	I0920 22:11:00.024491 1437250 cache.go:194] Successfully downloaded all kic artifacts
	I0920 22:11:00.024530 1437250 start.go:360] acquireMachinesLock for addons-860203: {Name:mk7109d4a29303e3b5f931bafed363a8e79eb129 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 22:11:00.024672 1437250 start.go:364] duration metric: took 121.204µs to acquireMachinesLock for "addons-860203"
	I0920 22:11:00.024705 1437250 start.go:93] Provisioning new machine with config: &{Name:addons-860203 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-860203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 22:11:00.024787 1437250 start.go:125] createHost starting for "" (driver="docker")
	I0920 22:11:00.032863 1437250 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0920 22:11:00.033167 1437250 start.go:159] libmachine.API.Create for "addons-860203" (driver="docker")
	I0920 22:11:00.033212 1437250 client.go:168] LocalClient.Create starting
	I0920 22:11:00.033381 1437250 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19672-1431110/.minikube/certs/ca.pem
	I0920 22:11:00.258734 1437250 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19672-1431110/.minikube/certs/cert.pem
	I0920 22:11:01.066970 1437250 cli_runner.go:164] Run: docker network inspect addons-860203 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0920 22:11:01.083158 1437250 cli_runner.go:211] docker network inspect addons-860203 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0920 22:11:01.083252 1437250 network_create.go:284] running [docker network inspect addons-860203] to gather additional debugging logs...
	I0920 22:11:01.083270 1437250 cli_runner.go:164] Run: docker network inspect addons-860203
	W0920 22:11:01.100293 1437250 cli_runner.go:211] docker network inspect addons-860203 returned with exit code 1
	I0920 22:11:01.100330 1437250 network_create.go:287] error running [docker network inspect addons-860203]: docker network inspect addons-860203: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-860203 not found
	I0920 22:11:01.100345 1437250 network_create.go:289] output of [docker network inspect addons-860203]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-860203 not found
	
	** /stderr **
	I0920 22:11:01.100449 1437250 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0920 22:11:01.117945 1437250 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40017a5200}
	I0920 22:11:01.117991 1437250 network_create.go:124] attempt to create docker network addons-860203 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0920 22:11:01.118057 1437250 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-860203 addons-860203
	I0920 22:11:01.192633 1437250 network_create.go:108] docker network addons-860203 192.168.49.0/24 created
	I0920 22:11:01.192667 1437250 kic.go:121] calculated static IP "192.168.49.2" for the "addons-860203" container
	I0920 22:11:01.192756 1437250 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0920 22:11:01.209057 1437250 cli_runner.go:164] Run: docker volume create addons-860203 --label name.minikube.sigs.k8s.io=addons-860203 --label created_by.minikube.sigs.k8s.io=true
	I0920 22:11:01.226807 1437250 oci.go:103] Successfully created a docker volume addons-860203
	I0920 22:11:01.226917 1437250 cli_runner.go:164] Run: docker run --rm --name addons-860203-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-860203 --entrypoint /usr/bin/test -v addons-860203:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -d /var/lib
	I0920 22:11:03.250488 1437250 cli_runner.go:217] Completed: docker run --rm --name addons-860203-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-860203 --entrypoint /usr/bin/test -v addons-860203:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -d /var/lib: (2.023522916s)
	I0920 22:11:03.250522 1437250 oci.go:107] Successfully prepared a docker volume addons-860203
	I0920 22:11:03.250547 1437250 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 22:11:03.250568 1437250 kic.go:194] Starting extracting preloaded images to volume ...
	I0920 22:11:03.250644 1437250 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19672-1431110/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-860203:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -I lz4 -xf /preloaded.tar -C /extractDir
	I0920 22:11:06.894227 1437250 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19672-1431110/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-860203:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -I lz4 -xf /preloaded.tar -C /extractDir: (3.643527962s)
	I0920 22:11:06.894259 1437250 kic.go:203] duration metric: took 3.643689124s to extract preloaded images to volume ...
	W0920 22:11:06.894393 1437250 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0920 22:11:06.894498 1437250 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0920 22:11:06.946813 1437250 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-860203 --name addons-860203 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-860203 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-860203 --network addons-860203 --ip 192.168.49.2 --volume addons-860203:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed
	I0920 22:11:07.264192 1437250 cli_runner.go:164] Run: docker container inspect addons-860203 --format={{.State.Running}}
	I0920 22:11:07.288256 1437250 cli_runner.go:164] Run: docker container inspect addons-860203 --format={{.State.Status}}
	I0920 22:11:07.312293 1437250 cli_runner.go:164] Run: docker exec addons-860203 stat /var/lib/dpkg/alternatives/iptables
	I0920 22:11:07.381179 1437250 oci.go:144] the created container "addons-860203" has a running status.
	I0920 22:11:07.381214 1437250 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19672-1431110/.minikube/machines/addons-860203/id_rsa...
	I0920 22:11:09.103445 1437250 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19672-1431110/.minikube/machines/addons-860203/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0920 22:11:09.122294 1437250 cli_runner.go:164] Run: docker container inspect addons-860203 --format={{.State.Status}}
	I0920 22:11:09.138425 1437250 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0920 22:11:09.138449 1437250 kic_runner.go:114] Args: [docker exec --privileged addons-860203 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0920 22:11:09.195393 1437250 cli_runner.go:164] Run: docker container inspect addons-860203 --format={{.State.Status}}
	I0920 22:11:09.211907 1437250 machine.go:93] provisionDockerMachine start ...
	I0920 22:11:09.212014 1437250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-860203
	I0920 22:11:09.228947 1437250 main.go:141] libmachine: Using SSH client type: native
	I0920 22:11:09.229215 1437250 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I0920 22:11:09.229232 1437250 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 22:11:09.367568 1437250 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-860203
	
	I0920 22:11:09.367637 1437250 ubuntu.go:169] provisioning hostname "addons-860203"
	I0920 22:11:09.367722 1437250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-860203
	I0920 22:11:09.384596 1437250 main.go:141] libmachine: Using SSH client type: native
	I0920 22:11:09.384847 1437250 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I0920 22:11:09.384864 1437250 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-860203 && echo "addons-860203" | sudo tee /etc/hostname
	I0920 22:11:09.528153 1437250 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-860203
	
	I0920 22:11:09.528298 1437250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-860203
	I0920 22:11:09.546105 1437250 main.go:141] libmachine: Using SSH client type: native
	I0920 22:11:09.546345 1437250 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I0920 22:11:09.546371 1437250 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-860203' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-860203/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-860203' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 22:11:09.680446 1437250 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 22:11:09.680518 1437250 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19672-1431110/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-1431110/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-1431110/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-1431110/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-1431110/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-1431110/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-1431110/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-1431110/.minikube}
	I0920 22:11:09.680548 1437250 ubuntu.go:177] setting up certificates
	I0920 22:11:09.680558 1437250 provision.go:84] configureAuth start
	I0920 22:11:09.680634 1437250 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-860203
	I0920 22:11:09.699236 1437250 provision.go:143] copyHostCerts
	I0920 22:11:09.699323 1437250 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-1431110/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-1431110/.minikube/ca.pem (1078 bytes)
	I0920 22:11:09.699443 1437250 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-1431110/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-1431110/.minikube/cert.pem (1123 bytes)
	I0920 22:11:09.699507 1437250 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-1431110/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-1431110/.minikube/key.pem (1679 bytes)
	I0920 22:11:09.699561 1437250 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-1431110/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-1431110/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-1431110/.minikube/certs/ca-key.pem org=jenkins.addons-860203 san=[127.0.0.1 192.168.49.2 addons-860203 localhost minikube]
	I0920 22:11:10.003164 1437250 provision.go:177] copyRemoteCerts
	I0920 22:11:10.003243 1437250 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 22:11:10.003289 1437250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-860203
	I0920 22:11:10.033625 1437250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/19672-1431110/.minikube/machines/addons-860203/id_rsa Username:docker}
	I0920 22:11:10.134222 1437250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-1431110/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0920 22:11:10.161402 1437250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-1431110/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 22:11:10.187560 1437250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-1431110/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 22:11:10.212964 1437250 provision.go:87] duration metric: took 532.391676ms to configureAuth
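configureAuth copied the host CA into the machine and installed a server certificate whose SANs (127.0.0.1, 192.168.49.2, addons-860203, localhost, minikube) let dockerd serve TLS on port 2376. A sketch of verifying that endpoint from the host with the same client certs (docker version is just an example call; the cert paths come from the copyHostCerts lines above):

	docker --tlsverify \
	  --tlscacert /home/jenkins/minikube-integration/19672-1431110/.minikube/ca.pem \
	  --tlscert   /home/jenkins/minikube-integration/19672-1431110/.minikube/cert.pem \
	  --tlskey    /home/jenkins/minikube-integration/19672-1431110/.minikube/key.pem \
	  -H tcp://192.168.49.2:2376 version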
	I0920 22:11:10.213045 1437250 ubuntu.go:193] setting minikube options for container-runtime
	I0920 22:11:10.213248 1437250 config.go:182] Loaded profile config "addons-860203": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 22:11:10.213309 1437250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-860203
	I0920 22:11:10.241600 1437250 main.go:141] libmachine: Using SSH client type: native
	I0920 22:11:10.241856 1437250 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I0920 22:11:10.241876 1437250 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0920 22:11:10.376607 1437250 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0920 22:11:10.376628 1437250 ubuntu.go:71] root file system type: overlay
	I0920 22:11:10.376771 1437250 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0920 22:11:10.376847 1437250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-860203
	I0920 22:11:10.394139 1437250 main.go:141] libmachine: Using SSH client type: native
	I0920 22:11:10.394388 1437250 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I0920 22:11:10.394470 1437250 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0920 22:11:10.540963 1437250 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0920 22:11:10.541053 1437250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-860203
	I0920 22:11:10.560013 1437250 main.go:141] libmachine: Using SSH client type: native
	I0920 22:11:10.560289 1437250 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33530 <nil> <nil>}
	I0920 22:11:10.560317 1437250 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0920 22:11:11.342485 1437250 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-09-19 14:24:16.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-09-20 22:11:10.532899678 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0920 22:11:11.342564 1437250 machine.go:96] duration metric: took 2.130635372s to provisionDockerMachine
	I0920 22:11:11.342591 1437250 client.go:171] duration metric: took 11.309372402s to LocalClient.Create
	I0920 22:11:11.342643 1437250 start.go:167] duration metric: took 11.309474521s to libmachine.API.Create "addons-860203"
	I0920 22:11:11.342657 1437250 start.go:293] postStartSetup for "addons-860203" (driver="docker")
	I0920 22:11:11.342669 1437250 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 22:11:11.342740 1437250 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 22:11:11.342783 1437250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-860203
	I0920 22:11:11.359323 1437250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/19672-1431110/.minikube/machines/addons-860203/id_rsa Username:docker}
	I0920 22:11:11.453317 1437250 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 22:11:11.456608 1437250 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0920 22:11:11.456648 1437250 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0920 22:11:11.456676 1437250 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0920 22:11:11.456690 1437250 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0920 22:11:11.456701 1437250 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-1431110/.minikube/addons for local assets ...
	I0920 22:11:11.456788 1437250 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-1431110/.minikube/files for local assets ...
	I0920 22:11:11.456815 1437250 start.go:296] duration metric: took 114.151495ms for postStartSetup
	I0920 22:11:11.457125 1437250 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-860203
	I0920 22:11:11.473312 1437250 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/addons-860203/config.json ...
	I0920 22:11:11.473609 1437250 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 22:11:11.473660 1437250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-860203
	I0920 22:11:11.489981 1437250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/19672-1431110/.minikube/machines/addons-860203/id_rsa Username:docker}
	I0920 22:11:11.581135 1437250 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0920 22:11:11.585640 1437250 start.go:128] duration metric: took 11.560836705s to createHost
	I0920 22:11:11.585674 1437250 start.go:83] releasing machines lock for "addons-860203", held for 11.560991294s
	I0920 22:11:11.585780 1437250 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-860203
	I0920 22:11:11.601869 1437250 ssh_runner.go:195] Run: cat /version.json
	I0920 22:11:11.601926 1437250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-860203
	I0920 22:11:11.602177 1437250 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 22:11:11.602247 1437250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-860203
	I0920 22:11:11.627690 1437250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/19672-1431110/.minikube/machines/addons-860203/id_rsa Username:docker}
	I0920 22:11:11.632186 1437250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/19672-1431110/.minikube/machines/addons-860203/id_rsa Username:docker}
	I0920 22:11:11.852232 1437250 ssh_runner.go:195] Run: systemctl --version
	I0920 22:11:11.856480 1437250 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0920 22:11:11.860708 1437250 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0920 22:11:11.887251 1437250 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0920 22:11:11.887331 1437250 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 22:11:11.916220 1437250 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0920 22:11:11.916294 1437250 start.go:495] detecting cgroup driver to use...
	I0920 22:11:11.916338 1437250 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0920 22:11:11.916457 1437250 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 22:11:11.934094 1437250 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0920 22:11:11.944569 1437250 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0920 22:11:11.954978 1437250 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0920 22:11:11.955098 1437250 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0920 22:11:11.965663 1437250 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0920 22:11:11.975954 1437250 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0920 22:11:11.986463 1437250 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0920 22:11:11.996646 1437250 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 22:11:12.008925 1437250 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0920 22:11:12.019840 1437250 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0920 22:11:12.030149 1437250 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0920 22:11:12.040757 1437250 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 22:11:12.049681 1437250 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 22:11:12.059138 1437250 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:11:12.148336 1437250 ssh_runner.go:195] Run: sudo systemctl restart containerd
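The sed edits above force containerd onto the cgroupfs driver (SystemdCgroup = false for the runc runtime) to match the "cgroupfs" driver detected on the host; kubelet, docker, and containerd all have to agree on one cgroup driver or pods fail to start. One-line check of the result (sketch):

	grep -n 'SystemdCgroup' /etc/containerd/config.toml   # expect: SystemdCgroup = false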
	I0920 22:11:12.246482 1437250 start.go:495] detecting cgroup driver to use...
	I0920 22:11:12.246537 1437250 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0920 22:11:12.246615 1437250 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0920 22:11:12.261829 1437250 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0920 22:11:12.261986 1437250 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0920 22:11:12.274793 1437250 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 22:11:12.292444 1437250 ssh_runner.go:195] Run: which cri-dockerd
	I0920 22:11:12.296390 1437250 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0920 22:11:12.305676 1437250 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0920 22:11:12.330218 1437250 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0920 22:11:12.434223 1437250 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0920 22:11:12.533941 1437250 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0920 22:11:12.534085 1437250 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0920 22:11:12.556948 1437250 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:11:12.663164 1437250 ssh_runner.go:195] Run: sudo systemctl restart docker
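docker.go:574 writes a 130-byte /etc/docker/daemon.json that pins docker itself to cgroupfs. The log records only the file's size, not its body; a plausible minimal equivalent (the contents are an assumption, not the verbatim file):

	echo '{ "exec-opts": ["native.cgroupdriver=cgroupfs"] }' | sudo tee /etc/docker/daemon.json
	sudo systemctl restart docker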
	I0920 22:11:12.923639 1437250 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0920 22:11:12.936029 1437250 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0920 22:11:12.950274 1437250 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0920 22:11:13.046602 1437250 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0920 22:11:13.138345 1437250 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:11:13.227119 1437250 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0920 22:11:13.241407 1437250 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0920 22:11:13.253111 1437250 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:11:13.353669 1437250 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0920 22:11:13.423900 1437250 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0920 22:11:13.424049 1437250 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0920 22:11:13.429378 1437250 start.go:563] Will wait 60s for crictl version
	I0920 22:11:13.429474 1437250 ssh_runner.go:195] Run: which crictl
	I0920 22:11:13.433274 1437250 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 22:11:13.477957 1437250 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.0
	RuntimeApiVersion:  v1
	I0920 22:11:13.478079 1437250 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0920 22:11:13.499700 1437250 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0920 22:11:13.529681 1437250 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.0 ...
	I0920 22:11:13.529826 1437250 cli_runner.go:164] Run: docker network inspect addons-860203 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0920 22:11:13.545768 1437250 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0920 22:11:13.549513 1437250 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
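The brace-group rewrite above keeps the host.minikube.internal mapping idempotent: grep -v drops any stale entry, echo appends the fresh one, and the result is copied back with cp rather than mv, which matters because /etc/hosts is bind-mounted into the container and replacing the inode would detach it. The same idiom runs again at 22:11:13.71 for control-plane.minikube.internal. Generalized sketch (NAME and IP are placeholders):

	NAME=host.minikube.internal; IP=192.168.49.1
	{ grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$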
	I0920 22:11:13.560613 1437250 kubeadm.go:883] updating cluster {Name:addons-860203 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-860203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 22:11:13.560732 1437250 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 22:11:13.560790 1437250 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0920 22:11:13.579457 1437250 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0920 22:11:13.579477 1437250 docker.go:615] Images already preloaded, skipping extraction
	I0920 22:11:13.579538 1437250 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0920 22:11:13.598294 1437250 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0920 22:11:13.598315 1437250 cache_images.go:84] Images are preloaded, skipping loading
	I0920 22:11:13.598326 1437250 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 docker true true} ...
	I0920 22:11:13.598418 1437250 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-860203 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-860203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 22:11:13.598485 1437250 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0920 22:11:13.640169 1437250 cni.go:84] Creating CNI manager for ""
	I0920 22:11:13.640199 1437250 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 22:11:13.640209 1437250 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 22:11:13.640229 1437250 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-860203 NodeName:addons-860203 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 22:11:13.640377 1437250 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-860203"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
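The three YAML documents above (InitConfiguration/ClusterConfiguration for kubeadm, plus KubeletConfiguration and KubeProxyConfiguration) are what the kubeadm init --config call below consumes. kubeadm of this generation can lint the file beforehand; the deprecated v1beta3 apiVersion would surface here just as it does in the init warnings further down (sketch):

	sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml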
	
	I0920 22:11:13.640449 1437250 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 22:11:13.649222 1437250 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 22:11:13.649294 1437250 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 22:11:13.658087 1437250 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0920 22:11:13.676733 1437250 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 22:11:13.696440 1437250 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0920 22:11:13.714812 1437250 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0920 22:11:13.718195 1437250 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 22:11:13.728673 1437250 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:11:13.818841 1437250 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 22:11:13.834028 1437250 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/addons-860203 for IP: 192.168.49.2
	I0920 22:11:13.834092 1437250 certs.go:194] generating shared ca certs ...
	I0920 22:11:13.834122 1437250 certs.go:226] acquiring lock for ca certs: {Name:mkb63a23b6c1705f7f8da5dc3b2062b12902193f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:11:13.834281 1437250 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-1431110/.minikube/ca.key
	I0920 22:11:14.684911 1437250 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-1431110/.minikube/ca.crt ...
	I0920 22:11:14.684944 1437250 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-1431110/.minikube/ca.crt: {Name:mk0665ad6b5ffe54f7121f590dfc8a84b33d9d6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:11:14.685760 1437250 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-1431110/.minikube/ca.key ...
	I0920 22:11:14.685777 1437250 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-1431110/.minikube/ca.key: {Name:mk0090fa3e4a5d91e019eb2318a1087fe16caa5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:11:14.686487 1437250 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-1431110/.minikube/proxy-client-ca.key
	I0920 22:11:14.802951 1437250 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-1431110/.minikube/proxy-client-ca.crt ...
	I0920 22:11:14.802978 1437250 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-1431110/.minikube/proxy-client-ca.crt: {Name:mkb03a292981a629db2acba8bc080725787582a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:11:14.803151 1437250 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-1431110/.minikube/proxy-client-ca.key ...
	I0920 22:11:14.803162 1437250 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-1431110/.minikube/proxy-client-ca.key: {Name:mk058371e409208c0eb491681b0174cde37b9cb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:11:14.803800 1437250 certs.go:256] generating profile certs ...
	I0920 22:11:14.803867 1437250 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/addons-860203/client.key
	I0920 22:11:14.803892 1437250 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/addons-860203/client.crt with IP's: []
	I0920 22:11:15.122897 1437250 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/addons-860203/client.crt ...
	I0920 22:11:15.122940 1437250 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/addons-860203/client.crt: {Name:mka52b51293bcec35602fb5706334454d0fcf55a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:11:15.123716 1437250 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/addons-860203/client.key ...
	I0920 22:11:15.123733 1437250 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/addons-860203/client.key: {Name:mkb883981f116448ecb8f763ea5789f81f3a39de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:11:15.123822 1437250 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/addons-860203/apiserver.key.9c71775c
	I0920 22:11:15.123842 1437250 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/addons-860203/apiserver.crt.9c71775c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0920 22:11:15.299225 1437250 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/addons-860203/apiserver.crt.9c71775c ...
	I0920 22:11:15.299260 1437250 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/addons-860203/apiserver.crt.9c71775c: {Name:mk914307924a4c6a3ccbae66616ae5437ddb53e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:11:15.299473 1437250 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/addons-860203/apiserver.key.9c71775c ...
	I0920 22:11:15.299489 1437250 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/addons-860203/apiserver.key.9c71775c: {Name:mk71be1a5d2becf8bc44256363e109e3ef71a577 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:11:15.300200 1437250 certs.go:381] copying /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/addons-860203/apiserver.crt.9c71775c -> /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/addons-860203/apiserver.crt
	I0920 22:11:15.300303 1437250 certs.go:385] copying /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/addons-860203/apiserver.key.9c71775c -> /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/addons-860203/apiserver.key
	I0920 22:11:15.300366 1437250 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/addons-860203/proxy-client.key
	I0920 22:11:15.300389 1437250 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/addons-860203/proxy-client.crt with IP's: []
	I0920 22:11:15.501045 1437250 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/addons-860203/proxy-client.crt ...
	I0920 22:11:15.501080 1437250 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/addons-860203/proxy-client.crt: {Name:mk9a31a706580984f3f3f3d5f8ffc6f48dd2d2d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:11:15.501276 1437250 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/addons-860203/proxy-client.key ...
	I0920 22:11:15.501291 1437250 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/addons-860203/proxy-client.key: {Name:mk6c05fd7df0c3c5d4b46baae472c4132ec0b8ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:11:15.502050 1437250 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-1431110/.minikube/certs/ca-key.pem (1679 bytes)
	I0920 22:11:15.502099 1437250 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-1431110/.minikube/certs/ca.pem (1078 bytes)
	I0920 22:11:15.502131 1437250 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-1431110/.minikube/certs/cert.pem (1123 bytes)
	I0920 22:11:15.502160 1437250 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-1431110/.minikube/certs/key.pem (1679 bytes)
	I0920 22:11:15.502768 1437250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-1431110/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 22:11:15.527564 1437250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-1431110/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 22:11:15.553087 1437250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-1431110/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 22:11:15.578695 1437250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-1431110/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 22:11:15.602489 1437250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/addons-860203/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0920 22:11:15.626719 1437250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/addons-860203/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 22:11:15.650874 1437250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/addons-860203/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 22:11:15.675048 1437250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/addons-860203/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 22:11:15.699207 1437250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-1431110/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 22:11:15.723266 1437250 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 22:11:15.741419 1437250 ssh_runner.go:195] Run: openssl version
	I0920 22:11:15.747178 1437250 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 22:11:15.756789 1437250 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:11:15.760196 1437250 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 22:11 /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:11:15.760265 1437250 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:11:15.767307 1437250 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
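The b5213941.0 link name is not arbitrary: OpenSSL resolves CAs in /etc/ssl/certs by a hash of the certificate subject, so the symlink must be called <subject-hash>.0, and the openssl x509 -hash call above computes exactly that value. Rebuilding the link by hand (sketch):

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"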
	I0920 22:11:15.777212 1437250 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 22:11:15.780570 1437250 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 22:11:15.780616 1437250 kubeadm.go:392] StartCluster: {Name:addons-860203 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-860203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 22:11:15.780749 1437250 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0920 22:11:15.798175 1437250 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 22:11:15.807410 1437250 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 22:11:15.816471 1437250 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0920 22:11:15.816536 1437250 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 22:11:15.827573 1437250 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 22:11:15.827640 1437250 kubeadm.go:157] found existing configuration files:
	
	I0920 22:11:15.827718 1437250 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 22:11:15.836691 1437250 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 22:11:15.836782 1437250 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 22:11:15.845585 1437250 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 22:11:15.855732 1437250 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 22:11:15.855800 1437250 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 22:11:15.864418 1437250 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 22:11:15.873356 1437250 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 22:11:15.873447 1437250 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 22:11:15.882026 1437250 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 22:11:15.890903 1437250 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 22:11:15.890971 1437250 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 22:11:15.899538 1437250 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0920 22:11:15.940099 1437250 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 22:11:15.940166 1437250 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 22:11:15.975453 1437250 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0920 22:11:15.975528 1437250 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
	I0920 22:11:15.975572 1437250 kubeadm.go:310] OS: Linux
	I0920 22:11:15.975632 1437250 kubeadm.go:310] CGROUPS_CPU: enabled
	I0920 22:11:15.975688 1437250 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0920 22:11:15.975738 1437250 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0920 22:11:15.975790 1437250 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0920 22:11:15.975841 1437250 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0920 22:11:15.975892 1437250 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0920 22:11:15.975940 1437250 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0920 22:11:15.975993 1437250 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0920 22:11:15.976041 1437250 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0920 22:11:16.041861 1437250 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 22:11:16.041993 1437250 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 22:11:16.042088 1437250 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 22:11:16.059615 1437250 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 22:11:16.062313 1437250 out.go:235]   - Generating certificates and keys ...
	I0920 22:11:16.062411 1437250 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 22:11:16.062514 1437250 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 22:11:16.580680 1437250 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0920 22:11:16.743104 1437250 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0920 22:11:17.276354 1437250 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0920 22:11:17.660914 1437250 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0920 22:11:17.937741 1437250 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0920 22:11:17.937961 1437250 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-860203 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0920 22:11:19.018327 1437250 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0920 22:11:19.018622 1437250 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-860203 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0920 22:11:19.499911 1437250 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0920 22:11:20.013206 1437250 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0920 22:11:20.280613 1437250 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0920 22:11:20.281040 1437250 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 22:11:21.243214 1437250 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 22:11:21.735230 1437250 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 22:11:21.981617 1437250 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 22:11:22.146843 1437250 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 22:11:22.398717 1437250 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 22:11:22.399370 1437250 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 22:11:22.402364 1437250 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 22:11:22.404454 1437250 out.go:235]   - Booting up control plane ...
	I0920 22:11:22.404569 1437250 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 22:11:22.404652 1437250 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 22:11:22.405337 1437250 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 22:11:22.416577 1437250 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 22:11:22.423191 1437250 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 22:11:22.423459 1437250 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 22:11:22.544766 1437250 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 22:11:22.544888 1437250 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 22:11:23.543261 1437250 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.00084109s
	I0920 22:11:23.543355 1437250 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 22:11:31.044837 1437250 kubeadm.go:310] [api-check] The API server is healthy after 7.501662901s
	I0920 22:11:31.068377 1437250 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 22:11:31.082131 1437250 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 22:11:31.111492 1437250 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 22:11:31.111728 1437250 kubeadm.go:310] [mark-control-plane] Marking the node addons-860203 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 22:11:31.127072 1437250 kubeadm.go:310] [bootstrap-token] Using token: f9qrbx.ovidgbjqjnqjux0k
	I0920 22:11:31.130003 1437250 out.go:235]   - Configuring RBAC rules ...
	I0920 22:11:31.130139 1437250 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 22:11:31.134893 1437250 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 22:11:31.144121 1437250 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 22:11:31.150253 1437250 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 22:11:31.154299 1437250 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 22:11:31.158649 1437250 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 22:11:31.456184 1437250 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 22:11:31.883908 1437250 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 22:11:32.451654 1437250 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 22:11:32.452838 1437250 kubeadm.go:310] 
	I0920 22:11:32.452918 1437250 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 22:11:32.452932 1437250 kubeadm.go:310] 
	I0920 22:11:32.453008 1437250 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 22:11:32.453020 1437250 kubeadm.go:310] 
	I0920 22:11:32.453046 1437250 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 22:11:32.453120 1437250 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 22:11:32.453176 1437250 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 22:11:32.453185 1437250 kubeadm.go:310] 
	I0920 22:11:32.453238 1437250 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 22:11:32.453247 1437250 kubeadm.go:310] 
	I0920 22:11:32.453294 1437250 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 22:11:32.453303 1437250 kubeadm.go:310] 
	I0920 22:11:32.453355 1437250 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 22:11:32.453433 1437250 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 22:11:32.453505 1437250 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 22:11:32.453513 1437250 kubeadm.go:310] 
	I0920 22:11:32.453596 1437250 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 22:11:32.453674 1437250 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 22:11:32.453683 1437250 kubeadm.go:310] 
	I0920 22:11:32.453766 1437250 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token f9qrbx.ovidgbjqjnqjux0k \
	I0920 22:11:32.453872 1437250 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:dba785f05f9af5e10914cb4a95f718c4d16aaf429b63058a8bb7654f2b6eb9a6 \
	I0920 22:11:32.453896 1437250 kubeadm.go:310] 	--control-plane 
	I0920 22:11:32.453905 1437250 kubeadm.go:310] 
	I0920 22:11:32.453988 1437250 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 22:11:32.453997 1437250 kubeadm.go:310] 
	I0920 22:11:32.454077 1437250 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token f9qrbx.ovidgbjqjnqjux0k \
	I0920 22:11:32.454177 1437250 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:dba785f05f9af5e10914cb4a95f718c4d16aaf429b63058a8bb7654f2b6eb9a6 
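
The bootstrap token in the join commands above expires (24h by default), so the printed commands are only valid for a short window; a fresh join command can be regenerated on the control plane with:

    kubeadm token create --print-join-command
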
	I0920 22:11:32.457563 1437250 kubeadm.go:310] W0920 22:11:15.936153    1809 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 22:11:32.457960 1437250 kubeadm.go:310] W0920 22:11:15.937492    1809 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 22:11:32.458208 1437250 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
	I0920 22:11:32.458326 1437250 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 22:11:32.458352 1437250 cni.go:84] Creating CNI manager for ""
	I0920 22:11:32.458374 1437250 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 22:11:32.461090 1437250 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 22:11:32.463558 1437250 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 22:11:32.472475 1437250 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
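
The 496-byte conflist copied here is minikube's bridge CNI config. A representative conflist of this shape, for illustration only (the field values are assumptions, not the exact payload; the 10.244.0.0/16 pod subnet matches the 10.244.0.x pod IPs seen later in this log):

    cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
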
	I0920 22:11:32.492833 1437250 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 22:11:32.492935 1437250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:11:32.493031 1437250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-860203 minikube.k8s.io/updated_at=2024_09_20T22_11_32_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b921bee7dddd4990dd76a4773b23d7ec11e6144f minikube.k8s.io/name=addons-860203 minikube.k8s.io/primary=true
	I0920 22:11:32.608941 1437250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:11:32.660696 1437250 ops.go:34] apiserver oom_adj: -16
	I0920 22:11:33.109309 1437250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:11:33.609144 1437250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:11:34.109034 1437250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:11:34.609666 1437250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:11:35.109016 1437250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:11:35.609039 1437250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:11:36.109518 1437250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:11:36.608989 1437250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:11:37.109441 1437250 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:11:37.216986 1437250 kubeadm.go:1113] duration metric: took 4.724098596s to wait for elevateKubeSystemPrivileges
	I0920 22:11:37.217079 1437250 kubeadm.go:394] duration metric: took 21.436466317s to StartCluster
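
The elevateKubeSystemPrivileges wait measured above is the repeated "kubectl get sa default" polling: the minikube-rbac clusterrolebinding grants cluster-admin to a default ServiceAccount, but default ServiceAccounts are created asynchronously by kube-controller-manager, so minikube polls until one exists. Roughly equivalent as a shell loop (a sketch of the polling, not minikube's actual code):

    until sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5   # retry until the default ServiceAccount appears
    done
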
	I0920 22:11:37.217137 1437250 settings.go:142] acquiring lock: {Name:mke25580fff74d1ca80ea87db010c6c8e10f6b3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:11:37.217302 1437250 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19672-1431110/kubeconfig
	I0920 22:11:37.217763 1437250 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-1431110/kubeconfig: {Name:mk35d9577dd5026a704c231b87328cbc763b753d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:11:37.218579 1437250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0920 22:11:37.218898 1437250 config.go:182] Loaded profile config "addons-860203": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 22:11:37.219002 1437250 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 22:11:37.219052 1437250 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
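
This toEnable map is driven by the test's minikube start invocation; the same per-addon switches are available interactively, e.g.:

    minikube -p addons-860203 addons enable registry
    minikube -p addons-860203 addons list    # shows each addon's enabled/disabled state
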
	I0920 22:11:37.219214 1437250 addons.go:69] Setting yakd=true in profile "addons-860203"
	I0920 22:11:37.219229 1437250 addons.go:234] Setting addon yakd=true in "addons-860203"
	I0920 22:11:37.219251 1437250 host.go:66] Checking if "addons-860203" exists ...
	I0920 22:11:37.219756 1437250 cli_runner.go:164] Run: docker container inspect addons-860203 --format={{.State.Status}}
	I0920 22:11:37.220252 1437250 addons.go:69] Setting inspektor-gadget=true in profile "addons-860203"
	I0920 22:11:37.220271 1437250 addons.go:234] Setting addon inspektor-gadget=true in "addons-860203"
	I0920 22:11:37.220297 1437250 host.go:66] Checking if "addons-860203" exists ...
	I0920 22:11:37.220732 1437250 cli_runner.go:164] Run: docker container inspect addons-860203 --format={{.State.Status}}
	I0920 22:11:37.223077 1437250 addons.go:69] Setting metrics-server=true in profile "addons-860203"
	I0920 22:11:37.227092 1437250 addons.go:234] Setting addon metrics-server=true in "addons-860203"
	I0920 22:11:37.223228 1437250 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-860203"
	I0920 22:11:37.223238 1437250 addons.go:69] Setting registry=true in profile "addons-860203"
	I0920 22:11:37.223245 1437250 addons.go:69] Setting storage-provisioner=true in profile "addons-860203"
	I0920 22:11:37.223249 1437250 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-860203"
	I0920 22:11:37.223252 1437250 addons.go:69] Setting volcano=true in profile "addons-860203"
	I0920 22:11:37.223256 1437250 addons.go:69] Setting volumesnapshots=true in profile "addons-860203"
	I0920 22:11:37.224281 1437250 addons.go:69] Setting cloud-spanner=true in profile "addons-860203"
	I0920 22:11:37.224297 1437250 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-860203"
	I0920 22:11:37.224304 1437250 addons.go:69] Setting default-storageclass=true in profile "addons-860203"
	I0920 22:11:37.224312 1437250 addons.go:69] Setting gcp-auth=true in profile "addons-860203"
	I0920 22:11:37.224319 1437250 addons.go:69] Setting ingress=true in profile "addons-860203"
	I0920 22:11:37.224332 1437250 addons.go:69] Setting ingress-dns=true in profile "addons-860203"
	I0920 22:11:37.226984 1437250 out.go:177] * Verifying Kubernetes components...
	I0920 22:11:37.227762 1437250 host.go:66] Checking if "addons-860203" exists ...
	I0920 22:11:37.228399 1437250 cli_runner.go:164] Run: docker container inspect addons-860203 --format={{.State.Status}}
	I0920 22:11:37.232711 1437250 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:11:37.232965 1437250 addons.go:234] Setting addon volumesnapshots=true in "addons-860203"
	I0920 22:11:37.233044 1437250 host.go:66] Checking if "addons-860203" exists ...
	I0920 22:11:37.233642 1437250 cli_runner.go:164] Run: docker container inspect addons-860203 --format={{.State.Status}}
	I0920 22:11:37.239575 1437250 addons.go:234] Setting addon cloud-spanner=true in "addons-860203"
	I0920 22:11:37.239653 1437250 host.go:66] Checking if "addons-860203" exists ...
	I0920 22:11:37.240491 1437250 cli_runner.go:164] Run: docker container inspect addons-860203 --format={{.State.Status}}
	I0920 22:11:37.244336 1437250 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-860203"
	I0920 22:11:37.244415 1437250 host.go:66] Checking if "addons-860203" exists ...
	I0920 22:11:37.244969 1437250 cli_runner.go:164] Run: docker container inspect addons-860203 --format={{.State.Status}}
	I0920 22:11:37.262251 1437250 addons.go:234] Setting addon registry=true in "addons-860203"
	I0920 22:11:37.262314 1437250 host.go:66] Checking if "addons-860203" exists ...
	I0920 22:11:37.262794 1437250 cli_runner.go:164] Run: docker container inspect addons-860203 --format={{.State.Status}}
	I0920 22:11:37.262987 1437250 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-860203"
	I0920 22:11:37.263021 1437250 host.go:66] Checking if "addons-860203" exists ...
	I0920 22:11:37.263452 1437250 cli_runner.go:164] Run: docker container inspect addons-860203 --format={{.State.Status}}
	I0920 22:11:37.274302 1437250 addons.go:234] Setting addon storage-provisioner=true in "addons-860203"
	I0920 22:11:37.274369 1437250 host.go:66] Checking if "addons-860203" exists ...
	I0920 22:11:37.274943 1437250 cli_runner.go:164] Run: docker container inspect addons-860203 --format={{.State.Status}}
	I0920 22:11:37.276270 1437250 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-860203"
	I0920 22:11:37.276617 1437250 cli_runner.go:164] Run: docker container inspect addons-860203 --format={{.State.Status}}
	I0920 22:11:37.288829 1437250 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-860203"
	I0920 22:11:37.289484 1437250 mustload.go:65] Loading cluster: addons-860203
	I0920 22:11:37.289679 1437250 config.go:182] Loaded profile config "addons-860203": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 22:11:37.289851 1437250 cli_runner.go:164] Run: docker container inspect addons-860203 --format={{.State.Status}}
	I0920 22:11:37.289919 1437250 cli_runner.go:164] Run: docker container inspect addons-860203 --format={{.State.Status}}
	I0920 22:11:37.303435 1437250 addons.go:234] Setting addon volcano=true in "addons-860203"
	I0920 22:11:37.303493 1437250 host.go:66] Checking if "addons-860203" exists ...
	I0920 22:11:37.303974 1437250 cli_runner.go:164] Run: docker container inspect addons-860203 --format={{.State.Status}}
	I0920 22:11:37.304124 1437250 addons.go:234] Setting addon ingress=true in "addons-860203"
	I0920 22:11:37.304228 1437250 host.go:66] Checking if "addons-860203" exists ...
	I0920 22:11:37.304964 1437250 cli_runner.go:164] Run: docker container inspect addons-860203 --format={{.State.Status}}
	I0920 22:11:37.321404 1437250 addons.go:234] Setting addon ingress-dns=true in "addons-860203"
	I0920 22:11:37.321562 1437250 host.go:66] Checking if "addons-860203" exists ...
	I0920 22:11:37.322575 1437250 cli_runner.go:164] Run: docker container inspect addons-860203 --format={{.State.Status}}
	I0920 22:11:37.439929 1437250 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0920 22:11:37.446596 1437250 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0920 22:11:37.446673 1437250 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0920 22:11:37.446773 1437250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-860203
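
This docker-inspect template, repeated before each scp/ssh below, resolves the host port Docker mapped to the node container's sshd; per the ssh-client lines that follow, it resolves to 33530 here:

    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-860203
    # -> 33530
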
	I0920 22:11:37.448590 1437250 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0920 22:11:37.452579 1437250 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0920 22:11:37.473169 1437250 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0920 22:11:37.491725 1437250 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0920 22:11:37.491922 1437250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-860203
	I0920 22:11:37.499207 1437250 host.go:66] Checking if "addons-860203" exists ...
	I0920 22:11:37.502678 1437250 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0920 22:11:37.502889 1437250 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0920 22:11:37.524886 1437250 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 22:11:37.524999 1437250 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 22:11:37.525125 1437250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-860203
	I0920 22:11:37.541074 1437250 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0920 22:11:37.541167 1437250 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0920 22:11:37.541291 1437250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-860203
	I0920 22:11:37.545526 1437250 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0920 22:11:37.545553 1437250 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0920 22:11:37.545637 1437250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-860203
	I0920 22:11:37.558733 1437250 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-860203"
	I0920 22:11:37.558783 1437250 host.go:66] Checking if "addons-860203" exists ...
	I0920 22:11:37.559751 1437250 cli_runner.go:164] Run: docker container inspect addons-860203 --format={{.State.Status}}
	I0920 22:11:37.561196 1437250 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0920 22:11:37.561452 1437250 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0920 22:11:37.562279 1437250 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0920 22:11:37.569250 1437250 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0920 22:11:37.569465 1437250 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0920 22:11:37.570498 1437250 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 22:11:37.573938 1437250 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0920 22:11:37.574019 1437250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-860203
	I0920 22:11:37.607655 1437250 out.go:177]   - Using image docker.io/registry:2.8.3
	I0920 22:11:37.610617 1437250 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0920 22:11:37.610643 1437250 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0920 22:11:37.610726 1437250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-860203
	I0920 22:11:37.610910 1437250 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0920 22:11:37.614877 1437250 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0920 22:11:37.621330 1437250 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0920 22:11:37.621409 1437250 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0920 22:11:37.621513 1437250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-860203
	I0920 22:11:37.636275 1437250 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0920 22:11:37.650125 1437250 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 22:11:37.660364 1437250 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:11:37.660735 1437250 addons.go:234] Setting addon default-storageclass=true in "addons-860203"
	I0920 22:11:37.660786 1437250 host.go:66] Checking if "addons-860203" exists ...
	I0920 22:11:37.661251 1437250 cli_runner.go:164] Run: docker container inspect addons-860203 --format={{.State.Status}}
	I0920 22:11:37.663385 1437250 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0920 22:11:37.667168 1437250 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 22:11:37.667191 1437250 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 22:11:37.667280 1437250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-860203
	I0920 22:11:37.690198 1437250 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0920 22:11:37.696402 1437250 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0920 22:11:37.698497 1437250 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 22:11:37.701360 1437250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/19672-1431110/.minikube/machines/addons-860203/id_rsa Username:docker}
	I0920 22:11:37.704465 1437250 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0920 22:11:37.704489 1437250 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0920 22:11:37.704568 1437250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-860203
	I0920 22:11:37.708217 1437250 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0920 22:11:37.710685 1437250 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0920 22:11:37.715567 1437250 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0920 22:11:37.715591 1437250 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0920 22:11:37.715660 1437250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-860203
	I0920 22:11:37.716190 1437250 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0920 22:11:37.720220 1437250 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0920 22:11:37.722002 1437250 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0920 22:11:37.722023 1437250 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0920 22:11:37.722099 1437250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-860203
	I0920 22:11:37.768187 1437250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/19672-1431110/.minikube/machines/addons-860203/id_rsa Username:docker}
	I0920 22:11:37.794048 1437250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/19672-1431110/.minikube/machines/addons-860203/id_rsa Username:docker}
	I0920 22:11:37.831218 1437250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/19672-1431110/.minikube/machines/addons-860203/id_rsa Username:docker}
	I0920 22:11:37.835913 1437250 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
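
The sed pipeline above rewrites the Corefile held in the coredns ConfigMap before replacing it. Reconstructed from the two sed expressions (not captured output), the edit injects a "log" directive before "errors" and this stanza before the "forward . /etc/resolv.conf" line:

    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }

This is what the "host record injected into CoreDNS's ConfigMap" line later in the log refers to.
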
	I0920 22:11:37.836152 1437250 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 22:11:37.838832 1437250 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0920 22:11:37.839984 1437250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/19672-1431110/.minikube/machines/addons-860203/id_rsa Username:docker}
	I0920 22:11:37.841542 1437250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/19672-1431110/.minikube/machines/addons-860203/id_rsa Username:docker}
	I0920 22:11:37.844237 1437250 out.go:177]   - Using image docker.io/busybox:stable
	I0920 22:11:37.845738 1437250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/19672-1431110/.minikube/machines/addons-860203/id_rsa Username:docker}
	I0920 22:11:37.846696 1437250 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 22:11:37.846712 1437250 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0920 22:11:37.846773 1437250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-860203
	I0920 22:11:37.879323 1437250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/19672-1431110/.minikube/machines/addons-860203/id_rsa Username:docker}
	I0920 22:11:37.892329 1437250 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 22:11:37.892360 1437250 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 22:11:37.892452 1437250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-860203
	I0920 22:11:37.914457 1437250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/19672-1431110/.minikube/machines/addons-860203/id_rsa Username:docker}
	I0920 22:11:37.917059 1437250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/19672-1431110/.minikube/machines/addons-860203/id_rsa Username:docker}
	I0920 22:11:37.925459 1437250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/19672-1431110/.minikube/machines/addons-860203/id_rsa Username:docker}
	I0920 22:11:37.929429 1437250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/19672-1431110/.minikube/machines/addons-860203/id_rsa Username:docker}
	I0920 22:11:37.970380 1437250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/19672-1431110/.minikube/machines/addons-860203/id_rsa Username:docker}
	I0920 22:11:37.975451 1437250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/19672-1431110/.minikube/machines/addons-860203/id_rsa Username:docker}
	I0920 22:11:38.584911 1437250 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0920 22:11:38.584980 1437250 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0920 22:11:38.589552 1437250 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0920 22:11:38.801380 1437250 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0920 22:11:38.801456 1437250 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0920 22:11:38.944555 1437250 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 22:11:38.987659 1437250 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0920 22:11:38.987737 1437250 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0920 22:11:39.009712 1437250 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0920 22:11:39.009800 1437250 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0920 22:11:39.058096 1437250 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0920 22:11:39.058171 1437250 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0920 22:11:39.061005 1437250 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0920 22:11:39.069828 1437250 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0920 22:11:39.075846 1437250 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0920 22:11:39.075936 1437250 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0920 22:11:39.097393 1437250 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 22:11:39.101791 1437250 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 22:11:39.102910 1437250 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 22:11:39.102961 1437250 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0920 22:11:39.103497 1437250 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0920 22:11:39.103541 1437250 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0920 22:11:39.114808 1437250 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 22:11:39.187105 1437250 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0920 22:11:39.187191 1437250 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0920 22:11:39.190492 1437250 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0920 22:11:39.190572 1437250 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0920 22:11:39.198036 1437250 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0920 22:11:39.216229 1437250 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0920 22:11:39.216303 1437250 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0920 22:11:39.251771 1437250 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0920 22:11:39.251844 1437250 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0920 22:11:39.279210 1437250 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0920 22:11:39.279295 1437250 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0920 22:11:39.341937 1437250 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 22:11:39.341962 1437250 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 22:11:39.357401 1437250 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0920 22:11:39.357432 1437250 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0920 22:11:39.363151 1437250 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0920 22:11:39.363231 1437250 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0920 22:11:39.533459 1437250 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0920 22:11:39.609446 1437250 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0920 22:11:39.609528 1437250 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0920 22:11:39.622555 1437250 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0920 22:11:39.622633 1437250 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0920 22:11:39.626083 1437250 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0920 22:11:39.678000 1437250 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0920 22:11:39.678074 1437250 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0920 22:11:39.700591 1437250 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 22:11:39.700668 1437250 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 22:11:39.877332 1437250 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0920 22:11:39.877411 1437250 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0920 22:11:39.886394 1437250 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0920 22:11:39.886482 1437250 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0920 22:11:39.958262 1437250 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0920 22:11:39.958339 1437250 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0920 22:11:40.011315 1437250 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 22:11:40.149142 1437250 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0920 22:11:40.149216 1437250 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0920 22:11:40.259405 1437250 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 22:11:40.259479 1437250 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0920 22:11:40.341027 1437250 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0920 22:11:40.341101 1437250 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0920 22:11:40.361129 1437250 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0920 22:11:40.361206 1437250 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0920 22:11:40.564994 1437250 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 22:11:40.650856 1437250 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0920 22:11:40.650929 1437250 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0920 22:11:40.668782 1437250 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.832829124s)
	I0920 22:11:40.668864 1437250 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0920 22:11:40.669723 1437250 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.833550725s)
	I0920 22:11:40.669889 1437250 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.080306783s)
	I0920 22:11:40.671313 1437250 node_ready.go:35] waiting up to 6m0s for node "addons-860203" to be "Ready" ...
	I0920 22:11:40.675450 1437250 node_ready.go:49] node "addons-860203" has status "Ready":"True"
	I0920 22:11:40.675477 1437250 node_ready.go:38] duration metric: took 4.116604ms for node "addons-860203" to be "Ready" ...
	I0920 22:11:40.675486 1437250 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 22:11:40.689410 1437250 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-rf5d4" in "kube-system" namespace to be "Ready" ...
	I0920 22:11:40.825572 1437250 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 22:11:40.825645 1437250 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0920 22:11:40.969517 1437250 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0920 22:11:40.969590 1437250 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0920 22:11:41.173770 1437250 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-860203" context rescaled to 1 replica
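
kubeadm ships coredns with two replicas and minikube trims the Deployment to one, which is why one of the two coredns pods (coredns-7c65d6cfc9-rf5d4) is observed terminating with phase "Succeeded" and skipped further down. The equivalent manual step would be roughly:

    kubectl --context addons-860203 -n kube-system scale deployment coredns --replicas=1
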
	I0920 22:11:41.202745 1437250 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 22:11:41.202826 1437250 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0920 22:11:41.225262 1437250 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 22:11:41.505942 1437250 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 22:11:42.777585 1437250 pod_ready.go:103] pod "coredns-7c65d6cfc9-rf5d4" in "kube-system" namespace has status "Ready":"False"
	I0920 22:11:44.135281 1437250 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.190647616s)
	I0920 22:11:44.514691 1437250 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0920 22:11:44.514895 1437250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-860203
	I0920 22:11:44.551425 1437250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/19672-1431110/.minikube/machines/addons-860203/id_rsa Username:docker}
	I0920 22:11:45.222701 1437250 pod_ready.go:103] pod "coredns-7c65d6cfc9-rf5d4" in "kube-system" namespace has status "Ready":"False"
	I0920 22:11:45.596165 1437250 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0920 22:11:45.825734 1437250 addons.go:234] Setting addon gcp-auth=true in "addons-860203"
	I0920 22:11:45.825835 1437250 host.go:66] Checking if "addons-860203" exists ...
	I0920 22:11:45.826406 1437250 cli_runner.go:164] Run: docker container inspect addons-860203 --format={{.State.Status}}
	I0920 22:11:45.858891 1437250 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0920 22:11:45.858944 1437250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-860203
	I0920 22:11:45.891226 1437250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33530 SSHKeyPath:/home/jenkins/minikube-integration/19672-1431110/.minikube/machines/addons-860203/id_rsa Username:docker}
	I0920 22:11:47.696200 1437250 pod_ready.go:103] pod "coredns-7c65d6cfc9-rf5d4" in "kube-system" namespace has status "Ready":"False"
	I0920 22:11:48.060803 1437250 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.990872571s)
	I0920 22:11:48.060889 1437250 addons.go:475] Verifying addon ingress=true in "addons-860203"
	I0920 22:11:48.064258 1437250 out.go:177] * Verifying ingress addon...
	I0920 22:11:48.067934 1437250 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0920 22:11:48.072659 1437250 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0920 22:11:48.072741 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
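
kapi.go polls the label selector until the pods report Running; a roughly equivalent one-off check from a shell (a sketch, not what the test itself runs):

    kubectl --context addons-860203 -n ingress-nginx wait pod \
      -l app.kubernetes.io/name=ingress-nginx --for=condition=Ready --timeout=6m
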
	I0920 22:11:48.609472 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:11:49.098388 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:11:49.580860 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:11:49.724865 1437250 pod_ready.go:103] pod "coredns-7c65d6cfc9-rf5d4" in "kube-system" namespace has status "Ready":"False"
	I0920 22:11:50.102394 1437250 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (11.041308403s)
	I0920 22:11:50.102458 1437250 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (11.004992382s)
	I0920 22:11:50.102490 1437250 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (11.000631606s)
	I0920 22:11:50.102737 1437250 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.987871563s)
	I0920 22:11:50.102785 1437250 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (10.904681865s)
	I0920 22:11:50.102819 1437250 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (10.569286957s)
	I0920 22:11:50.102838 1437250 addons.go:475] Verifying addon registry=true in "addons-860203"
	I0920 22:11:50.103073 1437250 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (10.476910711s)
	I0920 22:11:50.103362 1437250 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.091950731s)
	I0920 22:11:50.103388 1437250 addons.go:475] Verifying addon metrics-server=true in "addons-860203"
	I0920 22:11:50.103488 1437250 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (9.538414375s)
	W0920 22:11:50.103523 1437250 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0920 22:11:50.103541 1437250 retry.go:31] will retry after 160.335213ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
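
The failed apply and retry above are an ordering race rather than a persistent error: the VolumeSnapshotClass object is submitted in the same apply batch as the CRD that defines its kind, before API discovery has registered it. minikube simply retries (and shortly afterwards re-applies with "kubectl apply --force"); done by hand, the usual fix is to apply the CRDs first and wait for them to be established:

    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait --for=condition=established crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
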
	I0920 22:11:50.103610 1437250 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (8.878272707s)
	I0920 22:11:50.106087 1437250 out.go:177] * Verifying registry addon...
	I0920 22:11:50.106196 1437250 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-860203 service yakd-dashboard -n yakd-dashboard
	
	I0920 22:11:50.110817 1437250 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0920 22:11:50.132692 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:11:50.144407 1437250 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0920 22:11:50.144486 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 22:11:50.264927 1437250 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 22:11:50.274319 1437250 pod_ready.go:98] pod "coredns-7c65d6cfc9-rf5d4" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 22:11:49 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 22:11:37 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 22:11:37 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 22:11:37 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 22:11:37 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.2 HostIPs:[{IP:192.168.49.2}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-20 22:11:37 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-20 22:11:38 +0000 UTC,FinishedAt:2024-09-20 22:11:48 +0000 UTC,ContainerID:docker://ba1badee4884246f821b6d4e006983fb2dbb8d4bfefc7a4c8f1a634c91a8e494,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://ba1badee4884246f821b6d4e006983fb2dbb8d4bfefc7a4c8f1a634c91a8e494 Started:0x4001c59d10 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0x4001d18880} {Name:kube-api-access-xgnpf MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0x4001d18890}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0920 22:11:50.274401 1437250 pod_ready.go:82] duration metric: took 9.584956543s for pod "coredns-7c65d6cfc9-rf5d4" in "kube-system" namespace to be "Ready" ...
	E0920 22:11:50.274446 1437250 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-rf5d4" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 22:11:49 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 22:11:37 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 22:11:37 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 22:11:37 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 22:11:37 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.2 HostIPs:[{IP:192.168.49.2}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-20 22:11:37 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-20 22:11:38 +0000 UTC,FinishedAt:2024-09-20 22:11:48 +0000 UTC,ContainerID:docker://ba1badee4884246f821b6d4e006983fb2dbb8d4bfefc7a4c8f1a634c91a8e494,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://ba1badee4884246f821b6d4e006983fb2dbb8d4bfefc7a4c8f1a634c91a8e494 Started:0x4001c59d10 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0x4001d18880} {Name:kube-api-access-xgnpf MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0x4001d18890}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
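
The two "skipping!" entries above show pod_ready.go treating a pod whose phase is already Succeeded as terminal: it stops waiting for the Ready condition rather than blocking on a pod that can never become Ready again. A minimal sketch of that kind of check, using client-go types (the package and function names here are hypothetical, not minikube's actual helpers):

// Package podwait sketches the readiness check suggested by the log above.
package podwait

import corev1 "k8s.io/api/core/v1"

// ReadyOrSkip is a hypothetical helper mirroring the logged behaviour:
// a pod already in a terminal phase (Succeeded or Failed) is skipped
// rather than waited on for the Ready condition.
func ReadyOrSkip(pod *corev1.Pod) (ready, skip bool) {
	// Terminal phases can never transition back to Ready; stop waiting.
	if pod.Status.Phase == corev1.PodSucceeded || pod.Status.Phase == corev1.PodFailed {
		return false, true
	}
	// Otherwise report the current Ready condition, if present.
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, false
		}
	}
	return false, false
}
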
	I0920 22:11:50.274470 1437250 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-w7tzg" in "kube-system" namespace to be "Ready" ...
	I0920 22:11:50.596568 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:11:50.709949 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 22:11:51.055291 1437250 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (5.196368318s)
	I0920 22:11:51.055477 1437250 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (9.549423299s)
	I0920 22:11:51.056452 1437250 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-860203"
	I0920 22:11:51.060403 1437250 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 22:11:51.060538 1437250 out.go:177] * Verifying csi-hostpath-driver addon...
	I0920 22:11:51.063268 1437250 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0920 22:11:51.064312 1437250 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0920 22:11:51.066411 1437250 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0920 22:11:51.066436 1437250 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0920 22:11:51.069460 1437250 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0920 22:11:51.069489 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:11:51.074256 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:11:51.115484 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 22:11:51.180400 1437250 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0920 22:11:51.180427 1437250 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0920 22:11:51.220128 1437250 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 22:11:51.220153 1437250 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0920 22:11:51.337972 1437250 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 22:11:51.581024 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:11:51.581403 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:11:51.615285 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 22:11:52.070443 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:11:52.074720 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:11:52.115815 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 22:11:52.280980 1437250 pod_ready.go:103] pod "coredns-7c65d6cfc9-w7tzg" in "kube-system" namespace has status "Ready":"False"
	I0920 22:11:52.544528 1437250 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.279490055s)
	I0920 22:11:52.569793 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:11:52.573332 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:11:52.615303 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 22:11:52.868809 1437250 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.53075837s)
	I0920 22:11:52.872362 1437250 addons.go:475] Verifying addon gcp-auth=true in "addons-860203"
	I0920 22:11:52.875255 1437250 out.go:177] * Verifying gcp-auth addon...
	I0920 22:11:52.878927 1437250 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0920 22:11:52.882170 1437250 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0920 22:11:53.071500 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:11:53.076046 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:11:53.114845 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 22:11:53.569572 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:11:53.573814 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:11:53.614756 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 22:11:54.075593 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:11:54.075914 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:11:54.115061 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 22:11:54.281561 1437250 pod_ready.go:103] pod "coredns-7c65d6cfc9-w7tzg" in "kube-system" namespace has status "Ready":"False"
	I0920 22:11:54.571758 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:11:54.573910 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:11:54.614631 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 22:11:55.069620 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:11:55.072235 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:11:55.115204 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 22:11:55.570532 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:11:55.575898 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:11:55.615179 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 22:11:56.069264 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:11:56.074021 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:11:56.115266 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 22:11:56.281952 1437250 pod_ready.go:103] pod "coredns-7c65d6cfc9-w7tzg" in "kube-system" namespace has status "Ready":"False"
	I0920 22:11:56.572701 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:11:56.575806 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:11:56.615449 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 22:11:57.070372 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:11:57.075467 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:11:57.116094 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 22:11:57.573471 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:11:57.577321 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:11:57.615374 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 22:11:58.069474 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:11:58.071874 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:11:58.114518 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 22:11:58.570846 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:11:58.573606 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:11:58.615626 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 22:11:58.780966 1437250 pod_ready.go:103] pod "coredns-7c65d6cfc9-w7tzg" in "kube-system" namespace has status "Ready":"False"
	I0920 22:11:59.070626 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:11:59.073147 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:11:59.114658 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 22:11:59.570439 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:11:59.572338 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:11:59.614866 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 22:12:00.090029 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:00.093429 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:00.120456 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 22:12:00.570286 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:00.572615 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:00.616125 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 22:12:00.783218 1437250 pod_ready.go:103] pod "coredns-7c65d6cfc9-w7tzg" in "kube-system" namespace has status "Ready":"False"
	I0920 22:12:01.071414 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:01.076024 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:01.115950 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 22:12:01.569672 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:01.572555 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:01.615193 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 22:12:02.069478 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:02.074466 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:02.115265 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 22:12:02.569021 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:02.572782 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:02.620312 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 22:12:03.069707 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:03.073693 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:03.115694 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 22:12:03.281298 1437250 pod_ready.go:103] pod "coredns-7c65d6cfc9-w7tzg" in "kube-system" namespace has status "Ready":"False"
	I0920 22:12:03.568974 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:03.572243 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:03.614763 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 22:12:04.069847 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:04.072692 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:04.114898 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 22:12:04.569655 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:04.572102 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:04.615153 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 22:12:05.069292 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:05.073132 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:05.114737 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 22:12:05.281627 1437250 pod_ready.go:103] pod "coredns-7c65d6cfc9-w7tzg" in "kube-system" namespace has status "Ready":"False"
	I0920 22:12:05.569637 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:05.571687 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:05.616166 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 22:12:06.071573 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:06.073610 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:06.115580 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 22:12:06.581307 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:06.582673 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:06.620967 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 22:12:07.070134 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:07.073038 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:07.117342 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 22:12:07.282928 1437250 pod_ready.go:103] pod "coredns-7c65d6cfc9-w7tzg" in "kube-system" namespace has status "Ready":"False"
	I0920 22:12:07.576522 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:07.577644 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:07.618356 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 22:12:08.070176 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:08.074623 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:08.115475 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 22:12:08.570568 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:08.574345 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:08.615632 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 22:12:09.070874 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:09.074716 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:09.170381 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 22:12:09.571331 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:09.574207 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:09.614593 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 22:12:09.781471 1437250 pod_ready.go:103] pod "coredns-7c65d6cfc9-w7tzg" in "kube-system" namespace has status "Ready":"False"
	I0920 22:12:10.071314 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:10.073685 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:10.115505 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 22:12:10.568817 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:10.572431 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:10.615420 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 22:12:11.070373 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:11.074700 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:11.170137 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 22:12:11.570444 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:11.574604 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:11.615854 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 22:12:12.069355 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:12.072893 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:12.114812 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 22:12:12.281545 1437250 pod_ready.go:103] pod "coredns-7c65d6cfc9-w7tzg" in "kube-system" namespace has status "Ready":"False"
	I0920 22:12:12.569393 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:12.572716 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:12.615010 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 22:12:13.075173 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:13.076373 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:13.114373 1437250 kapi.go:107] duration metric: took 23.003555127s to wait for kubernetes.io/minikube-addons=registry ...
	I0920 22:12:13.569916 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:13.574148 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:14.074119 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:14.074954 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:14.282441 1437250 pod_ready.go:103] pod "coredns-7c65d6cfc9-w7tzg" in "kube-system" namespace has status "Ready":"False"
	I0920 22:12:14.569216 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:14.573629 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:15.070820 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:15.075918 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:15.569996 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:15.573627 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:15.781503 1437250 pod_ready.go:93] pod "coredns-7c65d6cfc9-w7tzg" in "kube-system" namespace has status "Ready":"True"
	I0920 22:12:15.781542 1437250 pod_ready.go:82] duration metric: took 25.507030862s for pod "coredns-7c65d6cfc9-w7tzg" in "kube-system" namespace to be "Ready" ...
	I0920 22:12:15.781554 1437250 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-860203" in "kube-system" namespace to be "Ready" ...
	I0920 22:12:15.787414 1437250 pod_ready.go:93] pod "etcd-addons-860203" in "kube-system" namespace has status "Ready":"True"
	I0920 22:12:15.787448 1437250 pod_ready.go:82] duration metric: took 5.878363ms for pod "etcd-addons-860203" in "kube-system" namespace to be "Ready" ...
	I0920 22:12:15.787461 1437250 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-860203" in "kube-system" namespace to be "Ready" ...
	I0920 22:12:15.793566 1437250 pod_ready.go:93] pod "kube-apiserver-addons-860203" in "kube-system" namespace has status "Ready":"True"
	I0920 22:12:15.793592 1437250 pod_ready.go:82] duration metric: took 6.122296ms for pod "kube-apiserver-addons-860203" in "kube-system" namespace to be "Ready" ...
	I0920 22:12:15.793604 1437250 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-860203" in "kube-system" namespace to be "Ready" ...
	I0920 22:12:15.806178 1437250 pod_ready.go:93] pod "kube-controller-manager-addons-860203" in "kube-system" namespace has status "Ready":"True"
	I0920 22:12:15.806211 1437250 pod_ready.go:82] duration metric: took 12.598987ms for pod "kube-controller-manager-addons-860203" in "kube-system" namespace to be "Ready" ...
	I0920 22:12:15.806224 1437250 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tp7vm" in "kube-system" namespace to be "Ready" ...
	I0920 22:12:15.814079 1437250 pod_ready.go:93] pod "kube-proxy-tp7vm" in "kube-system" namespace has status "Ready":"True"
	I0920 22:12:15.814104 1437250 pod_ready.go:82] duration metric: took 7.873289ms for pod "kube-proxy-tp7vm" in "kube-system" namespace to be "Ready" ...
	I0920 22:12:15.814116 1437250 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-860203" in "kube-system" namespace to be "Ready" ...
	I0920 22:12:16.069668 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:16.073236 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:16.178380 1437250 pod_ready.go:93] pod "kube-scheduler-addons-860203" in "kube-system" namespace has status "Ready":"True"
	I0920 22:12:16.178409 1437250 pod_ready.go:82] duration metric: took 364.275757ms for pod "kube-scheduler-addons-860203" in "kube-system" namespace to be "Ready" ...
	I0920 22:12:16.178422 1437250 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-wlbkx" in "kube-system" namespace to be "Ready" ...
	I0920 22:12:16.569728 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:16.574289 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:16.578468 1437250 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-wlbkx" in "kube-system" namespace has status "Ready":"True"
	I0920 22:12:16.578495 1437250 pod_ready.go:82] duration metric: took 400.064566ms for pod "nvidia-device-plugin-daemonset-wlbkx" in "kube-system" namespace to be "Ready" ...
	I0920 22:12:16.578507 1437250 pod_ready.go:39] duration metric: took 35.902985364s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 22:12:16.578552 1437250 api_server.go:52] waiting for apiserver process to appear ...
	I0920 22:12:16.578638 1437250 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:12:16.597544 1437250 api_server.go:72] duration metric: took 39.378420851s to wait for apiserver process to appear ...
	I0920 22:12:16.597568 1437250 api_server.go:88] waiting for apiserver healthz status ...
	I0920 22:12:16.597590 1437250 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0920 22:12:16.606245 1437250 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0920 22:12:16.607474 1437250 api_server.go:141] control plane version: v1.31.1
	I0920 22:12:16.607530 1437250 api_server.go:131] duration metric: took 9.949068ms to wait for apiserver health ...
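
The healthz probe logged above is essentially an HTTPS GET against https://192.168.49.2:8443/healthz that expects a 200 response with body "ok". A minimal stand-alone sketch of such a probe (the function name and timeout are assumptions; TLS verification is skipped only because this sketch does not load minikube's CA certificate):

package podwait

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// CheckHealthz is a hypothetical probe in the spirit of the log above:
// GET <base>/healthz and treat HTTP 200 as healthy. InsecureSkipVerify is
// used only because this sketch does not load the cluster's CA.
func CheckHealthz(base string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(base + "/healthz") // e.g. base = "https://192.168.49.2:8443"
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	return nil // the healthy body is typically just "ok", as logged above
}
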
	I0920 22:12:16.607540 1437250 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 22:12:16.784943 1437250 system_pods.go:59] 17 kube-system pods found
	I0920 22:12:16.785025 1437250 system_pods.go:61] "coredns-7c65d6cfc9-w7tzg" [4df5ca80-0514-4634-bfa0-27307f7133e3] Running
	I0920 22:12:16.785045 1437250 system_pods.go:61] "csi-hostpath-attacher-0" [036c23ca-8854-41ae-b95b-e9b5a6a4fa2f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0920 22:12:16.785054 1437250 system_pods.go:61] "csi-hostpath-resizer-0" [d569ef90-d5ff-49fd-a8b6-3b9c39e412a0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0920 22:12:16.785064 1437250 system_pods.go:61] "csi-hostpathplugin-8q6xm" [48e117ad-e715-422d-b840-70ab42935c7a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0920 22:12:16.785070 1437250 system_pods.go:61] "etcd-addons-860203" [e2401b4b-71ab-4b09-93e9-247713c004ee] Running
	I0920 22:12:16.785075 1437250 system_pods.go:61] "kube-apiserver-addons-860203" [6110017c-4455-4b0e-bae6-ab8166b24fe8] Running
	I0920 22:12:16.785097 1437250 system_pods.go:61] "kube-controller-manager-addons-860203" [41ca8282-9ca5-484e-a14f-b08d69bd1ef9] Running
	I0920 22:12:16.785109 1437250 system_pods.go:61] "kube-ingress-dns-minikube" [2b30cb1f-e600-463c-853c-b2e01cb43d56] Running
	I0920 22:12:16.785113 1437250 system_pods.go:61] "kube-proxy-tp7vm" [b1e4ff89-62f4-4d96-ad62-f9a30feff694] Running
	I0920 22:12:16.785117 1437250 system_pods.go:61] "kube-scheduler-addons-860203" [b99b3cb3-0f4a-47f2-bc1e-3d0a0ba599c6] Running
	I0920 22:12:16.785123 1437250 system_pods.go:61] "metrics-server-84c5f94fbc-px9x8" [dc37fd84-501c-485d-a0df-4e3a4e3d94e7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 22:12:16.785130 1437250 system_pods.go:61] "nvidia-device-plugin-daemonset-wlbkx" [a7d9294a-f162-4ea1-8e95-8df04a4f0793] Running
	I0920 22:12:16.785138 1437250 system_pods.go:61] "registry-66c9cd494c-zscw5" [1b3b7fa1-d8ab-4ede-bf68-c006a0d0c180] Running
	I0920 22:12:16.785143 1437250 system_pods.go:61] "registry-proxy-swlk5" [8167ccc6-dd33-4103-903e-3eed5bbba124] Running
	I0920 22:12:16.785154 1437250 system_pods.go:61] "snapshot-controller-56fcc65765-kbg2h" [83f65f47-7039-4eda-9622-3baf4fa2e817] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 22:12:16.785162 1437250 system_pods.go:61] "snapshot-controller-56fcc65765-nzjxr" [be1f88b4-cbff-461b-bda8-f2bee3c8353e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 22:12:16.785166 1437250 system_pods.go:61] "storage-provisioner" [e9bc59af-936f-46b8-8ae6-a5e35e5d0f03] Running
	I0920 22:12:16.785181 1437250 system_pods.go:74] duration metric: took 177.631797ms to wait for pod list to return data ...
	I0920 22:12:16.785190 1437250 default_sa.go:34] waiting for default service account to be created ...
	I0920 22:12:16.978073 1437250 default_sa.go:45] found service account: "default"
	I0920 22:12:16.978101 1437250 default_sa.go:55] duration metric: took 192.90107ms for default service account to be created ...
	I0920 22:12:16.978112 1437250 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 22:12:17.071411 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:17.075595 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:17.195414 1437250 system_pods.go:86] 17 kube-system pods found
	I0920 22:12:17.195461 1437250 system_pods.go:89] "coredns-7c65d6cfc9-w7tzg" [4df5ca80-0514-4634-bfa0-27307f7133e3] Running
	I0920 22:12:17.195475 1437250 system_pods.go:89] "csi-hostpath-attacher-0" [036c23ca-8854-41ae-b95b-e9b5a6a4fa2f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0920 22:12:17.195490 1437250 system_pods.go:89] "csi-hostpath-resizer-0" [d569ef90-d5ff-49fd-a8b6-3b9c39e412a0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0920 22:12:17.195499 1437250 system_pods.go:89] "csi-hostpathplugin-8q6xm" [48e117ad-e715-422d-b840-70ab42935c7a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0920 22:12:17.195515 1437250 system_pods.go:89] "etcd-addons-860203" [e2401b4b-71ab-4b09-93e9-247713c004ee] Running
	I0920 22:12:17.195521 1437250 system_pods.go:89] "kube-apiserver-addons-860203" [6110017c-4455-4b0e-bae6-ab8166b24fe8] Running
	I0920 22:12:17.195534 1437250 system_pods.go:89] "kube-controller-manager-addons-860203" [41ca8282-9ca5-484e-a14f-b08d69bd1ef9] Running
	I0920 22:12:17.195546 1437250 system_pods.go:89] "kube-ingress-dns-minikube" [2b30cb1f-e600-463c-853c-b2e01cb43d56] Running
	I0920 22:12:17.195554 1437250 system_pods.go:89] "kube-proxy-tp7vm" [b1e4ff89-62f4-4d96-ad62-f9a30feff694] Running
	I0920 22:12:17.195558 1437250 system_pods.go:89] "kube-scheduler-addons-860203" [b99b3cb3-0f4a-47f2-bc1e-3d0a0ba599c6] Running
	I0920 22:12:17.195569 1437250 system_pods.go:89] "metrics-server-84c5f94fbc-px9x8" [dc37fd84-501c-485d-a0df-4e3a4e3d94e7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 22:12:17.195577 1437250 system_pods.go:89] "nvidia-device-plugin-daemonset-wlbkx" [a7d9294a-f162-4ea1-8e95-8df04a4f0793] Running
	I0920 22:12:17.195582 1437250 system_pods.go:89] "registry-66c9cd494c-zscw5" [1b3b7fa1-d8ab-4ede-bf68-c006a0d0c180] Running
	I0920 22:12:17.195586 1437250 system_pods.go:89] "registry-proxy-swlk5" [8167ccc6-dd33-4103-903e-3eed5bbba124] Running
	I0920 22:12:17.195597 1437250 system_pods.go:89] "snapshot-controller-56fcc65765-kbg2h" [83f65f47-7039-4eda-9622-3baf4fa2e817] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 22:12:17.195608 1437250 system_pods.go:89] "snapshot-controller-56fcc65765-nzjxr" [be1f88b4-cbff-461b-bda8-f2bee3c8353e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 22:12:17.195612 1437250 system_pods.go:89] "storage-provisioner" [e9bc59af-936f-46b8-8ae6-a5e35e5d0f03] Running
	I0920 22:12:17.195622 1437250 system_pods.go:126] duration metric: took 217.504063ms to wait for k8s-apps to be running ...
	I0920 22:12:17.195637 1437250 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 22:12:17.195697 1437250 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 22:12:17.209092 1437250 system_svc.go:56] duration metric: took 13.443744ms WaitForService to wait for kubelet
	I0920 22:12:17.209121 1437250 kubeadm.go:582] duration metric: took 39.990002197s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 22:12:17.209143 1437250 node_conditions.go:102] verifying NodePressure condition ...
	I0920 22:12:17.378840 1437250 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0920 22:12:17.378878 1437250 node_conditions.go:123] node cpu capacity is 2
	I0920 22:12:17.378892 1437250 node_conditions.go:105] duration metric: took 169.743756ms to run NodePressure ...
	I0920 22:12:17.378904 1437250 start.go:241] waiting for startup goroutines ...
	I0920 22:12:17.569704 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:17.573890 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:18.069434 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:18.073436 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:18.569509 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:18.573129 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:19.074658 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:19.076517 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:19.571160 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:19.578446 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:20.081970 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:20.083386 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:20.575562 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:20.577078 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:21.074451 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:21.077542 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:21.571910 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:21.572264 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:22.069017 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:22.072398 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:22.572905 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:22.577112 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:23.069249 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:23.072138 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:23.572341 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:23.572780 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:24.070229 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:24.074130 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:24.570175 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:24.572830 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:25.068718 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:25.072275 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:25.570459 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:25.572966 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:26.070278 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:26.072860 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:26.581808 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:26.584603 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:27.069608 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:27.071728 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:27.573598 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:27.574575 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:28.068765 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:28.071917 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:28.569407 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:28.573314 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:29.070583 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:29.073099 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:29.571068 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:29.577051 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:30.072740 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:30.076119 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:30.570336 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:30.572703 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:31.069903 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:31.073730 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:31.569375 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:31.573305 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:32.069508 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:32.073736 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:32.568873 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:32.572486 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:33.072421 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:33.074169 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:33.571185 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:33.574032 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:34.071261 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:34.073990 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:34.570067 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:34.572068 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:35.071901 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:35.075511 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:35.568912 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:35.572572 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:36.071155 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:36.072732 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:36.572955 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:36.576596 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:37.069604 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:37.074647 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:37.587715 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:37.589001 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:38.072179 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:38.074856 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:38.570098 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:38.573880 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:39.069715 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:39.072133 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:39.569866 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:39.571397 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:40.070860 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:40.078192 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:40.569125 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:40.573058 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:41.069016 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:41.072831 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:41.569182 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:41.572933 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:42.074067 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:42.075734 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:42.569658 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:42.573692 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:43.072208 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:43.074987 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:43.570277 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:43.574040 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:44.069416 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:44.072620 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:44.569883 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:44.574077 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:45.070052 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 22:12:45.074676 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:45.571228 1437250 kapi.go:107] duration metric: took 54.506904248s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0920 22:12:45.572983 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:46.072565 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:46.572261 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:47.072320 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:47.572889 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:48.073233 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:48.574733 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:49.072846 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:49.573508 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:50.072935 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:50.572028 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:51.073084 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:51.573811 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:52.072701 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:52.572752 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:53.080231 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:53.578429 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:54.073801 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:54.573162 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:55.073739 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:55.573341 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:56.072811 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:56.572969 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:57.073327 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:57.572787 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:58.074215 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:58.576651 1437250 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 22:12:59.072992 1437250 kapi.go:107] duration metric: took 1m11.005054561s to wait for app.kubernetes.io/name=ingress-nginx ...
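
The loop that just finished (and the csi-hostpath-driver one before it) is minikube's kapi.go readiness poll: list pods matching a label selector roughly every 500ms and log the phase until it leaves Pending. Outside minikube the same wait can be approximated with kubectl; a minimal sketch, taking the ingress-nginx namespace from the container status further below and picking an arbitrary 6-minute timeout (note kubectl wait errors out if no pod matches yet, whereas minikube's loop tolerates the not-yet-created case):

	kubectl --context addons-860203 -n ingress-nginx wait pod \
	  -l app.kubernetes.io/name=ingress-nginx \
	  --for=condition=Ready --timeout=6m
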
	I0920 22:13:15.900773 1437250 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0920 22:13:15.900801 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:16.382375 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:16.883006 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:17.382638 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:17.883065 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:18.382021 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:18.882639 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:19.383139 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:19.883141 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:20.382007 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:20.882766 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:21.382997 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:21.882689 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:22.383238 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:22.883092 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:23.385215 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:23.882700 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:24.382785 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:24.883497 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:25.383086 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:25.883165 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:26.382259 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:26.883157 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:27.382748 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:27.882863 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:28.382550 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:28.883054 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:29.382859 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:29.882496 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:30.382735 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:30.882950 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:31.382578 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:31.882841 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:32.382098 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:32.883371 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:33.383038 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:33.882272 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:34.383093 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:34.883132 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:35.383059 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:35.882902 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:36.382983 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:36.882228 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:37.382908 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:37.882915 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:38.382698 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:38.882175 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:39.382712 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:39.883213 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:40.382969 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:40.883097 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:41.383018 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:41.882622 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:42.382111 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:42.882845 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:43.382976 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:43.883176 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:44.382966 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:44.882673 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:45.382723 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:45.882726 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:46.382239 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:46.882268 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:47.383103 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:47.883513 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:48.382842 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:48.882124 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:49.382905 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:49.882525 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:50.382444 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:50.882238 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:51.383202 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:51.882563 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:52.382050 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:52.883331 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:53.382552 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:53.882961 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:54.382466 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:54.883044 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:55.382338 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:55.886082 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:56.383253 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:56.882841 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:57.382135 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:57.882640 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:58.382299 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:58.883001 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:59.383215 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:13:59.882964 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:14:00.382849 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:14:00.882565 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:14:01.382677 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:14:01.882645 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:14:02.382503 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:14:02.882450 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:14:03.382200 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:14:03.882851 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:14:04.382555 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:14:04.882983 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:14:05.382102 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:14:05.883014 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:14:06.396502 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:14:06.883479 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:14:07.383733 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:14:07.882563 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:14:08.382122 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:14:08.882899 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:14:09.382060 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:14:09.883558 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:14:10.382224 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:14:10.886171 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:14:11.382559 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:14:11.882425 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:14:12.382918 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:14:12.882643 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:14:13.381950 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:14:13.883082 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:14:14.382647 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:14:14.882667 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:14:15.382660 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:14:15.882512 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:14:16.382607 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:14:16.883306 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:14:17.382176 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:14:17.883710 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:14:18.382509 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:14:18.882393 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:14:19.383056 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:14:19.883240 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:14:20.383804 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:14:20.883405 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:14:21.382974 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:14:21.883689 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:14:22.382426 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:14:22.883756 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:14:23.383519 1437250 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 22:14:23.882732 1437250 kapi.go:107] duration metric: took 2m31.003803617s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0920 22:14:23.885510 1437250 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-860203 cluster.
	I0920 22:14:23.888520 1437250 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0920 22:14:23.890907 1437250 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
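
The three tips above describe the gcp-auth admission webhook's opt-out: pods carrying the gcp-auth-skip-secret label key are left unmodified. A minimal sketch of such a pod; the pod name, image, and the "true" value are illustrative assumptions (the tip only specifies the label key):

	kubectl --context addons-860203 apply -f - <<'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-auth-demo            # hypothetical name
	  labels:
	    gcp-auth-skip-secret: "true"    # key from the tip above; value is an assumption
	spec:
	  containers:
	  - name: main
	    image: busybox                  # illustrative image
	    command: ["sleep", "3600"]
	EOF
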
	I0920 22:14:23.893582 1437250 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner-rancher, volcano, nvidia-device-plugin, storage-provisioner, ingress-dns, metrics-server, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0920 22:14:23.896169 1437250 addons.go:510] duration metric: took 2m46.677112624s for enable addons: enabled=[cloud-spanner storage-provisioner-rancher volcano nvidia-device-plugin storage-provisioner ingress-dns metrics-server inspektor-gadget yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0920 22:14:23.896220 1437250 start.go:246] waiting for cluster config update ...
	I0920 22:14:23.896242 1437250 start.go:255] writing updated cluster config ...
	I0920 22:14:23.896548 1437250 ssh_runner.go:195] Run: rm -f paused
	I0920 22:14:24.269362 1437250 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 22:14:24.274574 1437250 out.go:177] * Done! kubectl is now configured to use "addons-860203" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 20 22:23:53 addons-860203 dockerd[1282]: time="2024-09-20T22:23:53.725117343Z" level=info msg="ignoring event" container=83f7d0a4ecae5db3fc3f5c17169b823efcbca706861f15f9dbbf9b35c4e75e74 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 22:23:53 addons-860203 dockerd[1282]: time="2024-09-20T22:23:53.749955520Z" level=info msg="ignoring event" container=b3e08f7e4d625f301e3845c71981f1bb9a4d926eb019cd967f426789dc1ea8b5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 22:23:53 addons-860203 dockerd[1282]: time="2024-09-20T22:23:53.755641916Z" level=info msg="ignoring event" container=e4138036d0ac00ce0a02349d54f6551392743ec2c25ded0f64396efcf686bf20 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 22:23:53 addons-860203 dockerd[1282]: time="2024-09-20T22:23:53.763526144Z" level=info msg="ignoring event" container=110fec0668eefcf6b8763e073f69ca0325ab0e17209e38149ffcebb5b780c395 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 22:23:53 addons-860203 dockerd[1282]: time="2024-09-20T22:23:53.763574422Z" level=info msg="ignoring event" container=f3f86dff224fe96ce7bf528c1b0d05fc6ab91dc70b7b96a5d41476aaf32b675d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 22:23:53 addons-860203 dockerd[1282]: time="2024-09-20T22:23:53.775967161Z" level=info msg="ignoring event" container=5d13128e9adb98dcd8db2b122c6fea62ac6c385eedff1e49a6363b8159fc9425 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 22:23:53 addons-860203 dockerd[1282]: time="2024-09-20T22:23:53.965525704Z" level=info msg="ignoring event" container=51b3d2055313a63a604ead49fa3ca2859d0dd4e8bad6377eda4e7b92947e2ef4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 22:23:54 addons-860203 dockerd[1282]: time="2024-09-20T22:23:54.007389345Z" level=info msg="ignoring event" container=47be1ece9b6c5a729b3013bae4cced9fee797d592e31618194936c674c587a43 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 22:23:54 addons-860203 dockerd[1282]: time="2024-09-20T22:23:54.102647773Z" level=info msg="ignoring event" container=e9c6880573429dc88b658d4954bf8b239c148d2a2ceea222890be3a3831617f0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 22:24:00 addons-860203 dockerd[1282]: time="2024-09-20T22:24:00.367782561Z" level=info msg="ignoring event" container=1bfd949fb4b7f1661158ab7420403dec81db60b8aab426ac078e5b3eda729b8b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 22:24:00 addons-860203 dockerd[1282]: time="2024-09-20T22:24:00.373516988Z" level=info msg="ignoring event" container=e827e5daba0504ce9914849f072316c09f8b9fc1958e70e12d28aa05e0b38b39 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 22:24:00 addons-860203 dockerd[1282]: time="2024-09-20T22:24:00.544869441Z" level=info msg="ignoring event" container=d674fc1eaac19655c1dc012882854df5dcbdabec01cd83ad3b766bfb1349f5dd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 22:24:00 addons-860203 dockerd[1282]: time="2024-09-20T22:24:00.582409331Z" level=info msg="ignoring event" container=06bd94a4ea75ea00bc1de939b7ec2c4bcb1ad9553cd3bfb365cebf23355d3fe7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 22:24:04 addons-860203 dockerd[1282]: time="2024-09-20T22:24:04.995048093Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=e545bede8beb29ba traceID=9fb1fd2bc189b50e99b3c749b43e0b6e
	Sep 20 22:24:04 addons-860203 dockerd[1282]: time="2024-09-20T22:24:04.997748331Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=e545bede8beb29ba traceID=9fb1fd2bc189b50e99b3c749b43e0b6e
	Sep 20 22:24:08 addons-860203 dockerd[1282]: time="2024-09-20T22:24:08.125727092Z" level=info msg="ignoring event" container=09b27f82d2525319b104b10043caf255c35435fc4f0772b7f2f0759bf58c0ad5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 22:24:08 addons-860203 dockerd[1282]: time="2024-09-20T22:24:08.236862282Z" level=info msg="ignoring event" container=6d870ad001cdcba93d73640522dc5f6fd7b7487307d05c71aa518abf39fb14e0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 22:24:13 addons-860203 dockerd[1282]: time="2024-09-20T22:24:13.666119466Z" level=info msg="ignoring event" container=f2900678f47ea0fbbf80984945e70120ec9ced207d0ba3e99076dd817e488f0c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 22:24:19 addons-860203 cri-dockerd[1541]: time="2024-09-20T22:24:19Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2275a0ac56676415b0d88fbda0e0848b627faf97c51be18ef333d299ff00173d/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Sep 20 22:24:20 addons-860203 dockerd[1282]: time="2024-09-20T22:24:20.682662762Z" level=info msg="ignoring event" container=a4987316cd4e41fce6d7e6e52b88a652b0721ce5dab1cf719b20976d420e87f8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 22:24:21 addons-860203 dockerd[1282]: time="2024-09-20T22:24:21.415029435Z" level=info msg="ignoring event" container=1d40517a4d0780badfeaad14145f786c32f32e9e1aa39d93b521e5a6b185d84d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 22:24:21 addons-860203 dockerd[1282]: time="2024-09-20T22:24:21.485779334Z" level=info msg="ignoring event" container=581bcc27681cda312ee74f866f6a716c042d83e3495786b1231b573f6260146f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 22:24:21 addons-860203 dockerd[1282]: time="2024-09-20T22:24:21.742614826Z" level=info msg="ignoring event" container=2b8bbda23c6b1c84d4fcb58bd72885bf0ed2833bea43bc18d14fc88e1737e3f2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 22:24:21 addons-860203 dockerd[1282]: time="2024-09-20T22:24:21.793093312Z" level=info msg="ignoring event" container=fa6d507684b289059066fe9f9a5c93d36ba11ff3b49b335ce2cf58218caa53ad module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 22:24:22 addons-860203 cri-dockerd[1541]: time="2024-09-20T22:24:22Z" level=info msg="Stop pulling image docker.io/nginx:alpine: Status: Downloaded newer image for nginx:alpine"
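
Worth flagging in the entries above: the two 22:24:04 lines record a pull of gcr.io/k8s-minikube/busybox failing with "unauthorized: authentication failed". No tag was given, so the daemon resolved the implicit :latest manifest; the likely cause is simply that no latest tag is published for anonymous pulls, so any pod referencing the untagged image sits in ErrImagePull until it times out. A sketch of the distinction, run on the node via minikube ssh (the pinned tag is an assumption about what the repository publishes):

	minikube -p addons-860203 ssh -- docker pull gcr.io/k8s-minikube/busybox
	# resolves to :latest and fails with "unauthorized", as in the daemon log
	minikube -p addons-860203 ssh -- docker pull gcr.io/k8s-minikube/busybox:1.28.4-glibc
	# assumed-published tag; pinning avoids the :latest lookup entirely
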
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                       ATTEMPT             POD ID              POD
	7b88ef8f0095e       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 9 minutes ago       Running             gcp-auth                   0                   d3e44e21fdfd1       gcp-auth-89d5ffd79-jsv2g
	f14c05d4e9f82       registry.k8s.io/ingress-nginx/controller@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce             11 minutes ago      Running             controller                 0                   ae6129ff47bce       ingress-nginx-controller-bc57996ff-56r52
	1e7aab800c2f3       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              patch                      0                   2348c60939522       ingress-nginx-admission-patch-c6t65
	30634885c99a9       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              create                     0                   0c753df523a2d       ingress-nginx-admission-create-2qtkk
	d79942234900d       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                        12 minutes ago      Running             yakd                       0                   225b367b4ec82       yakd-dashboard-67d98fc6b-96gz9
	fdf457d9ecf58       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                       12 minutes ago      Running             local-path-provisioner     0                   f80108fb969a0       local-path-provisioner-86d989889c-kkss2
	02d9e5a8831c8       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c             12 minutes ago      Running             minikube-ingress-dns       0                   2d7284b056bd6       kube-ingress-dns-minikube
	8bf327597a89e       gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc               12 minutes ago      Running             cloud-spanner-emulator     0                   dd5662f9247b8       cloud-spanner-emulator-769b77f747-4vnl6
	1f67045493b63       nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47                     12 minutes ago      Running             nvidia-device-plugin-ctr   0                   f3b4d2190acad       nvidia-device-plugin-daemonset-wlbkx
	0a3c2a788d0b1       ba04bb24b9575                                                                                                                12 minutes ago      Running             storage-provisioner        0                   4206361768b32       storage-provisioner
	8f8348c729ba0       2f6c962e7b831                                                                                                                12 minutes ago      Running             coredns                    0                   512bf0455e90d       coredns-7c65d6cfc9-w7tzg
	fcebcf9757ce5       24a140c548c07                                                                                                                12 minutes ago      Running             kube-proxy                 0                   5eb4d7f1ddb0e       kube-proxy-tp7vm
	616c434207d54       7f8aa378bb47d                                                                                                                12 minutes ago      Running             kube-scheduler             0                   0ceddb5874fe9       kube-scheduler-addons-860203
	66a110939b233       279f381cb3736                                                                                                                12 minutes ago      Running             kube-controller-manager    0                   a045805387d09       kube-controller-manager-addons-860203
	4c9ff983a7180       d3f53a98c0a9d                                                                                                                12 minutes ago      Running             kube-apiserver             0                   1ec40a33eb482       kube-apiserver-addons-860203
	ba145fc220196       27e3830e14027                                                                                                                12 minutes ago      Running             etcd                       0                   b92e0a7ea730b       etcd-addons-860203
	
	
	==> controller_ingress [f14c05d4e9f8] <==
	I0920 22:12:58.841462       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"0dc6dce7-a9dc-4c6a-943c-ec6218701f52", APIVersion:"v1", ResourceVersion:"704", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0920 22:12:58.841751       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"b94024cf-2d54-454a-9c27-8b68c633b710", APIVersion:"v1", ResourceVersion:"705", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0920 22:13:00.033621       7 nginx.go:317] "Starting NGINX process"
	I0920 22:13:00.034015       7 nginx.go:337] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0920 22:13:00.034114       7 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0920 22:13:00.037463       7 controller.go:193] "Configuration changes detected, backend reload required"
	I0920 22:13:00.142236       7 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0920 22:13:00.148361       7 status.go:85] "New leader elected" identity="ingress-nginx-controller-bc57996ff-56r52"
	I0920 22:13:00.185339       7 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-bc57996ff-56r52" node="addons-860203"
	I0920 22:13:00.254209       7 controller.go:213] "Backend successfully reloaded"
	I0920 22:13:00.254561       7 controller.go:224] "Initial sync, sleeping for 1 second"
	I0920 22:13:00.255099       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-56r52", UID:"5bcb167f-27eb-4a1a-9ca0-8c5812059942", APIVersion:"v1", ResourceVersion:"729", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W0920 22:24:19.182116       7 controller.go:1110] Error obtaining Endpoints for Service "default/nginx": no object matching key "default/nginx" in local store
	I0920 22:24:19.200589       7 admission.go:149] processed ingress via admission controller {testedIngressLength:1 testedIngressTime:0.018s renderingIngressLength:1 renderingIngressTime:0.001s admissionTime:0.019s testedConfigurationSize:18.1kB}
	I0920 22:24:19.200630       7 main.go:107] "successfully validated configuration, accepting" ingress="default/nginx-ingress"
	I0920 22:24:19.205732       7 store.go:440] "Found valid IngressClass" ingress="default/nginx-ingress" ingressclass="nginx"
	W0920 22:24:19.206090       7 controller.go:1110] Error obtaining Endpoints for Service "default/nginx": no object matching key "default/nginx" in local store
	I0920 22:24:19.206225       7 controller.go:193] "Configuration changes detected, backend reload required"
	I0920 22:24:19.209771       7 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"9d8a8776-81df-4985-bcbe-b6fdc8c799e6", APIVersion:"networking.k8s.io/v1", ResourceVersion:"2883", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	I0920 22:24:19.261512       7 controller.go:213] "Backend successfully reloaded"
	I0920 22:24:19.262201       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-56r52", UID:"5bcb167f-27eb-4a1a-9ca0-8c5812059942", APIVersion:"v1", ResourceVersion:"729", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W0920 22:24:22.539639       7 controller.go:1216] Service "default/nginx" does not have any active Endpoint.
	I0920 22:24:22.539756       7 controller.go:193] "Configuration changes detected, backend reload required"
	I0920 22:24:22.601125       7 controller.go:213] "Backend successfully reloaded"
	I0920 22:24:22.601583       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-56r52", UID:"5bcb167f-27eb-4a1a-9ca0-8c5812059942", APIVersion:"v1", ResourceVersion:"729", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
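
The 22:24:19 admission entries show an Ingress default/nginx-ingress with IngressClass nginx being validated and synced, followed by endpoint warnings for Service default/nginx until its pod came up. A sketch reconstructing such an object from only the names in this log; the path, pathType, and service port are assumptions that do not appear here:

	kubectl --context addons-860203 apply -f - <<'EOF'
	apiVersion: networking.k8s.io/v1
	kind: Ingress
	metadata:
	  name: nginx-ingress        # name from the admission log
	  namespace: default
	spec:
	  ingressClassName: nginx    # class from the "Found valid IngressClass" entry
	  rules:
	  - http:
	      paths:
	      - path: /              # assumed
	        pathType: Prefix     # assumed
	        backend:
	          service:
	            name: nginx      # Service "default/nginx" from the endpoint warnings
	            port:
	              number: 80     # assumed; the port is not in the log
	EOF
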
	
	
	==> coredns [8f8348c729ba] <==
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	[INFO] Reloading complete
	[INFO] 10.244.0.7:59093 - 39691 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.0003121s
	[INFO] 10.244.0.7:59093 - 33038 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000098845s
	[INFO] 10.244.0.7:56464 - 60585 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000200939s
	[INFO] 10.244.0.7:56464 - 32429 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000093258s
	[INFO] 10.244.0.7:47212 - 51823 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000202309s
	[INFO] 10.244.0.7:47212 - 21100 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000099502s
	[INFO] 10.244.0.7:51113 - 1889 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000161383s
	[INFO] 10.244.0.7:51113 - 57442 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000131164s
	[INFO] 10.244.0.7:57065 - 34787 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00262487s
	[INFO] 10.244.0.7:57065 - 62191 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00260229s
	[INFO] 10.244.0.7:57241 - 47901 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000077307s
	[INFO] 10.244.0.7:57241 - 30239 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000060783s
	[INFO] 10.244.0.25:58067 - 34969 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000247477s
	[INFO] 10.244.0.25:55099 - 20651 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000167364s
	[INFO] 10.244.0.25:52419 - 37445 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000123575s
	[INFO] 10.244.0.25:43893 - 64624 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000099289s
	[INFO] 10.244.0.25:47208 - 12557 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000106927s
	[INFO] 10.244.0.25:46457 - 56668 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00019856s
	[INFO] 10.244.0.25:36619 - 11723 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002222294s
	[INFO] 10.244.0.25:41967 - 36166 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002127962s
	[INFO] 10.244.0.25:41478 - 55613 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.00152365s
	[INFO] 10.244.0.25:47203 - 56098 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002148162s
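
The NXDOMAIN/NOERROR pairs above are ordinary search-path expansion: the client's resolv.conf (rewritten by cri-dockerd at 22:24:19 in the Docker section) sets ndots:5, so registry.kube-system.svc.cluster.local, having only four dots, is also tried against each search suffix (kube-system.svc.cluster.local, svc.cluster.local, cluster.local, us-east-2.compute.internal) around the absolute lookup that actually answers. Appending a trailing dot marks the name fully qualified and skips the expansion; a sketch from any pod in the cluster (BusyBox-style tools assumed):

	nslookup registry.kube-system.svc.cluster.local.
	wget --spider -S http://registry.kube-system.svc.cluster.local./
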
	
	
	==> describe nodes <==
	Name:               addons-860203
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-860203
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b921bee7dddd4990dd76a4773b23d7ec11e6144f
	                    minikube.k8s.io/name=addons-860203
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T22_11_32_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-860203
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 22:11:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-860203
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 22:24:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 22:23:36 +0000   Fri, 20 Sep 2024 22:11:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 22:23:36 +0000   Fri, 20 Sep 2024 22:11:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 22:23:36 +0000   Fri, 20 Sep 2024 22:11:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 22:23:36 +0000   Fri, 20 Sep 2024 22:11:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-860203
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 4c1f1054b3bb444faf89dca85133d55f
	  System UUID:                3634163e-a8a2-4883-bafd-fa13fefb6dc0
	  Boot ID:                    32c222cc-d06c-4f68-9fc3-59cd35d0dbd2
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.3.0
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (16 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m18s
	  default                     cloud-spanner-emulator-769b77f747-4vnl6     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4s
	  gcp-auth                    gcp-auth-89d5ffd79-jsv2g                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-56r52    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         12m
	  kube-system                 coredns-7c65d6cfc9-w7tzg                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-addons-860203                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-860203                250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-860203       200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-tp7vm                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-860203                100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 nvidia-device-plugin-daemonset-wlbkx        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  local-path-storage          local-path-provisioner-86d989889c-kkss2     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  yakd-dashboard              yakd-dashboard-67d98fc6b-96gz9              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             388Mi (4%)  426Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 13m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node addons-860203 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x7 over 13m)  kubelet          Node addons-860203 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node addons-860203 status is now: NodeHasSufficientPID
	  Normal   Starting                 13m                kubelet          Starting kubelet.
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node addons-860203 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node addons-860203 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m                kubelet          Node addons-860203 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                node-controller  Node addons-860203 event: Registered Node addons-860203 in Controller
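
Everything in this section is a point-in-time snapshot that can be regenerated with:

	kubectl --context addons-860203 describe node addons-860203

The Allocated resources percentages are plain ratios against the Allocatable block: 850m of CPU requests over the 2000m (2-CPU) allocatable gives the (42%) shown, and 388Mi of memory requests over 8022304Ki allocatable truncates to the (4%).
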
	
	
	==> dmesg <==
	[Sep20 21:43] overlayfs: '/var/lib/containers/storage/overlay/l/ZLTOCNGE2IGM6DT7VP2QP7OV3M' not a directory
	[  +0.714455] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	
	
	==> etcd [ba145fc22019] <==
	{"level":"info","ts":"2024-09-20T22:11:24.311290Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-09-20T22:11:24.311382Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-09-20T22:11:25.292184Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-20T22:11:25.292417Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-20T22:11:25.292495Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-09-20T22:11:25.292542Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-09-20T22:11:25.292584Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-20T22:11:25.292639Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-09-20T22:11:25.292669Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-20T22:11:25.296285Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-860203 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-20T22:11:25.296639Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T22:11:25.296806Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T22:11:25.297027Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-20T22:11:25.297131Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-20T22:11:25.297268Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T22:11:25.297915Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T22:11:25.298201Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T22:11:25.298341Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T22:11:25.304790Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T22:11:25.306260Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-20T22:11:25.319558Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T22:11:25.339930Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-20T22:21:27.299229Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1897}
	{"level":"info","ts":"2024-09-20T22:21:27.362927Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1897,"took":"63.093314ms","hash":280868067,"current-db-size-bytes":8855552,"current-db-size":"8.9 MB","current-db-size-in-use-bytes":5070848,"current-db-size-in-use":"5.1 MB"}
	{"level":"info","ts":"2024-09-20T22:21:27.362977Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":280868067,"revision":1897,"compact-revision":-1}
	
	
	==> gcp-auth [7b88ef8f0095] <==
	2024/09/20 22:14:23 GCP Auth Webhook started!
	2024/09/20 22:14:40 Ready to marshal response ...
	2024/09/20 22:14:40 Ready to write response ...
	2024/09/20 22:14:41 Ready to marshal response ...
	2024/09/20 22:14:41 Ready to write response ...
	2024/09/20 22:15:05 Ready to marshal response ...
	2024/09/20 22:15:05 Ready to write response ...
	2024/09/20 22:15:05 Ready to marshal response ...
	2024/09/20 22:15:05 Ready to write response ...
	2024/09/20 22:15:05 Ready to marshal response ...
	2024/09/20 22:15:05 Ready to write response ...
	2024/09/20 22:23:09 Ready to marshal response ...
	2024/09/20 22:23:09 Ready to write response ...
	2024/09/20 22:23:10 Ready to marshal response ...
	2024/09/20 22:23:10 Ready to write response ...
	2024/09/20 22:23:10 Ready to marshal response ...
	2024/09/20 22:23:10 Ready to write response ...
	2024/09/20 22:23:20 Ready to marshal response ...
	2024/09/20 22:23:20 Ready to write response ...
	2024/09/20 22:23:27 Ready to marshal response ...
	2024/09/20 22:23:27 Ready to write response ...
	2024/09/20 22:23:43 Ready to marshal response ...
	2024/09/20 22:23:43 Ready to write response ...
	2024/09/20 22:24:19 Ready to marshal response ...
	2024/09/20 22:24:19 Ready to write response ...
	
	
	==> kernel <==
	 22:24:23 up  6:06,  0 users,  load average: 0.72, 0.76, 1.57
	Linux addons-860203 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [4c9ff983a718] <==
	W0920 22:14:57.345476       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0920 22:14:57.345574       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0920 22:14:57.444135       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0920 22:14:57.705337       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0920 22:14:57.978115       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0920 22:23:10.020875       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.104.142.98"}
	I0920 22:23:35.429600       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0920 22:23:59.940014       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 22:23:59.940065       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 22:23:59.965207       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 22:23:59.965253       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 22:23:59.987740       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 22:23:59.987810       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 22:24:00.147857       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 22:24:00.149264       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 22:24:00.152658       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 22:24:00.152709       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	E0920 22:24:00.459969       1 watch.go:250] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	W0920 22:24:01.140879       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0920 22:24:01.153800       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0920 22:24:01.162850       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0920 22:24:13.596422       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0920 22:24:14.718639       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0920 22:24:19.201631       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0920 22:24:19.493001       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.55.196"}
	
	
	==> kube-controller-manager [66a110939b23] <==
	E0920 22:24:09.209747       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 22:24:09.225388       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 22:24:09.225431       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 22:24:09.364111       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 22:24:09.364155       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 22:24:09.889534       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 22:24:09.889580       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 22:24:10.300790       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 22:24:10.300841       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 22:24:11.413303       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 22:24:11.413356       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	E0920 22:24:14.720249       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 22:24:15.642731       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 22:24:15.642774       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 22:24:17.375161       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 22:24:17.375213       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 22:24:17.465858       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 22:24:17.465902       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 22:24:18.130737       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 22:24:18.130785       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0920 22:24:21.332215       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="22.58µs"
	W0920 22:24:21.556015       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 22:24:21.556053       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 22:24:22.000233       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 22:24:22.000277       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [fcebcf9757ce] <==
	I0920 22:11:38.511359       1 server_linux.go:66] "Using iptables proxy"
	I0920 22:11:38.607608       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0920 22:11:38.607682       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 22:11:38.698966       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0920 22:11:38.699028       1 server_linux.go:169] "Using iptables Proxier"
	I0920 22:11:38.701660       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 22:11:38.702008       1 server.go:483] "Version info" version="v1.31.1"
	I0920 22:11:38.702024       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 22:11:38.715478       1 config.go:199] "Starting service config controller"
	I0920 22:11:38.716066       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 22:11:38.716120       1 config.go:105] "Starting endpoint slice config controller"
	I0920 22:11:38.716126       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 22:11:38.718945       1 config.go:328] "Starting node config controller"
	I0920 22:11:38.718963       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 22:11:38.816550       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 22:11:38.816617       1 shared_informer.go:320] Caches are synced for service config
	I0920 22:11:38.820621       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [616c434207d5] <==
	W0920 22:11:29.320501       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0920 22:11:29.322835       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 22:11:29.320553       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0920 22:11:29.323086       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 22:11:30.118540       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0920 22:11:30.118607       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0920 22:11:30.145961       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0920 22:11:30.146111       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 22:11:30.219181       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0920 22:11:30.219333       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 22:11:30.230651       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0920 22:11:30.230938       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 22:11:30.359746       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0920 22:11:30.359793       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0920 22:11:30.362819       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0920 22:11:30.362858       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 22:11:30.374119       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0920 22:11:30.374161       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 22:11:30.387908       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0920 22:11:30.388027       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 22:11:30.444340       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0920 22:11:30.444478       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 22:11:30.559806       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0920 22:11:30.559852       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0920 22:11:31.895061       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 20 22:24:19 addons-860203 kubelet[2333]: I0920 22:24:19.426478    2333 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc37fd84-501c-485d-a0df-4e3a4e3d94e7" containerName="metrics-server"
	Sep 20 22:24:19 addons-860203 kubelet[2333]: I0920 22:24:19.426484    2333 memory_manager.go:354] "RemoveStaleState removing state" podUID="036c23ca-8854-41ae-b95b-e9b5a6a4fa2f" containerName="csi-attacher"
	Sep 20 22:24:19 addons-860203 kubelet[2333]: I0920 22:24:19.429125    2333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/b1ddce0f-84aa-4e65-bdf8-3b5b54f4fb61-gcp-creds\") pod \"nginx\" (UID: \"b1ddce0f-84aa-4e65-bdf8-3b5b54f4fb61\") " pod="default/nginx"
	Sep 20 22:24:19 addons-860203 kubelet[2333]: I0920 22:24:19.429340    2333 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wzt9\" (UniqueName: \"kubernetes.io/projected/b1ddce0f-84aa-4e65-bdf8-3b5b54f4fb61-kube-api-access-8wzt9\") pod \"nginx\" (UID: \"b1ddce0f-84aa-4e65-bdf8-3b5b54f4fb61\") " pod="default/nginx"
	Sep 20 22:24:20 addons-860203 kubelet[2333]: I0920 22:24:20.941294    2333 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kw74s\" (UniqueName: \"kubernetes.io/projected/817592fb-1c62-4533-974f-078a77465ea2-kube-api-access-kw74s\") pod \"817592fb-1c62-4533-974f-078a77465ea2\" (UID: \"817592fb-1c62-4533-974f-078a77465ea2\") "
	Sep 20 22:24:20 addons-860203 kubelet[2333]: I0920 22:24:20.941349    2333 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/817592fb-1c62-4533-974f-078a77465ea2-gcp-creds\") pod \"817592fb-1c62-4533-974f-078a77465ea2\" (UID: \"817592fb-1c62-4533-974f-078a77465ea2\") "
	Sep 20 22:24:20 addons-860203 kubelet[2333]: I0920 22:24:20.941438    2333 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/817592fb-1c62-4533-974f-078a77465ea2-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "817592fb-1c62-4533-974f-078a77465ea2" (UID: "817592fb-1c62-4533-974f-078a77465ea2"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 20 22:24:20 addons-860203 kubelet[2333]: I0920 22:24:20.946318    2333 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/817592fb-1c62-4533-974f-078a77465ea2-kube-api-access-kw74s" (OuterVolumeSpecName: "kube-api-access-kw74s") pod "817592fb-1c62-4533-974f-078a77465ea2" (UID: "817592fb-1c62-4533-974f-078a77465ea2"). InnerVolumeSpecName "kube-api-access-kw74s". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 20 22:24:21 addons-860203 kubelet[2333]: I0920 22:24:21.042199    2333 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/817592fb-1c62-4533-974f-078a77465ea2-gcp-creds\") on node \"addons-860203\" DevicePath \"\""
	Sep 20 22:24:21 addons-860203 kubelet[2333]: I0920 22:24:21.042230    2333 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-kw74s\" (UniqueName: \"kubernetes.io/projected/817592fb-1c62-4533-974f-078a77465ea2-kube-api-access-kw74s\") on node \"addons-860203\" DevicePath \"\""
	Sep 20 22:24:21 addons-860203 kubelet[2333]: I0920 22:24:21.832867    2333 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="817592fb-1c62-4533-974f-078a77465ea2" path="/var/lib/kubelet/pods/817592fb-1c62-4533-974f-078a77465ea2/volumes"
	Sep 20 22:24:22 addons-860203 kubelet[2333]: I0920 22:24:22.048431    2333 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j27sh\" (UniqueName: \"kubernetes.io/projected/8167ccc6-dd33-4103-903e-3eed5bbba124-kube-api-access-j27sh\") pod \"8167ccc6-dd33-4103-903e-3eed5bbba124\" (UID: \"8167ccc6-dd33-4103-903e-3eed5bbba124\") "
	Sep 20 22:24:22 addons-860203 kubelet[2333]: I0920 22:24:22.048484    2333 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zkvp\" (UniqueName: \"kubernetes.io/projected/1b3b7fa1-d8ab-4ede-bf68-c006a0d0c180-kube-api-access-8zkvp\") pod \"1b3b7fa1-d8ab-4ede-bf68-c006a0d0c180\" (UID: \"1b3b7fa1-d8ab-4ede-bf68-c006a0d0c180\") "
	Sep 20 22:24:22 addons-860203 kubelet[2333]: I0920 22:24:22.051407    2333 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b3b7fa1-d8ab-4ede-bf68-c006a0d0c180-kube-api-access-8zkvp" (OuterVolumeSpecName: "kube-api-access-8zkvp") pod "1b3b7fa1-d8ab-4ede-bf68-c006a0d0c180" (UID: "1b3b7fa1-d8ab-4ede-bf68-c006a0d0c180"). InnerVolumeSpecName "kube-api-access-8zkvp". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 20 22:24:22 addons-860203 kubelet[2333]: I0920 22:24:22.051794    2333 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8167ccc6-dd33-4103-903e-3eed5bbba124-kube-api-access-j27sh" (OuterVolumeSpecName: "kube-api-access-j27sh") pod "8167ccc6-dd33-4103-903e-3eed5bbba124" (UID: "8167ccc6-dd33-4103-903e-3eed5bbba124"). InnerVolumeSpecName "kube-api-access-j27sh". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 20 22:24:22 addons-860203 kubelet[2333]: I0920 22:24:22.073984    2333 scope.go:117] "RemoveContainer" containerID="581bcc27681cda312ee74f866f6a716c042d83e3495786b1231b573f6260146f"
	Sep 20 22:24:22 addons-860203 kubelet[2333]: I0920 22:24:22.149556    2333 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-j27sh\" (UniqueName: \"kubernetes.io/projected/8167ccc6-dd33-4103-903e-3eed5bbba124-kube-api-access-j27sh\") on node \"addons-860203\" DevicePath \"\""
	Sep 20 22:24:22 addons-860203 kubelet[2333]: I0920 22:24:22.149781    2333 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-8zkvp\" (UniqueName: \"kubernetes.io/projected/1b3b7fa1-d8ab-4ede-bf68-c006a0d0c180-kube-api-access-8zkvp\") on node \"addons-860203\" DevicePath \"\""
	Sep 20 22:24:22 addons-860203 kubelet[2333]: I0920 22:24:22.160237    2333 scope.go:117] "RemoveContainer" containerID="581bcc27681cda312ee74f866f6a716c042d83e3495786b1231b573f6260146f"
	Sep 20 22:24:22 addons-860203 kubelet[2333]: E0920 22:24:22.162188    2333 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 581bcc27681cda312ee74f866f6a716c042d83e3495786b1231b573f6260146f" containerID="581bcc27681cda312ee74f866f6a716c042d83e3495786b1231b573f6260146f"
	Sep 20 22:24:22 addons-860203 kubelet[2333]: I0920 22:24:22.162449    2333 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"581bcc27681cda312ee74f866f6a716c042d83e3495786b1231b573f6260146f"} err="failed to get container status \"581bcc27681cda312ee74f866f6a716c042d83e3495786b1231b573f6260146f\": rpc error: code = Unknown desc = Error response from daemon: No such container: 581bcc27681cda312ee74f866f6a716c042d83e3495786b1231b573f6260146f"
	Sep 20 22:24:22 addons-860203 kubelet[2333]: I0920 22:24:22.162535    2333 scope.go:117] "RemoveContainer" containerID="1d40517a4d0780badfeaad14145f786c32f32e9e1aa39d93b521e5a6b185d84d"
	Sep 20 22:24:22 addons-860203 kubelet[2333]: I0920 22:24:22.192247    2333 scope.go:117] "RemoveContainer" containerID="1d40517a4d0780badfeaad14145f786c32f32e9e1aa39d93b521e5a6b185d84d"
	Sep 20 22:24:22 addons-860203 kubelet[2333]: E0920 22:24:22.193825    2333 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 1d40517a4d0780badfeaad14145f786c32f32e9e1aa39d93b521e5a6b185d84d" containerID="1d40517a4d0780badfeaad14145f786c32f32e9e1aa39d93b521e5a6b185d84d"
	Sep 20 22:24:22 addons-860203 kubelet[2333]: I0920 22:24:22.194000    2333 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"1d40517a4d0780badfeaad14145f786c32f32e9e1aa39d93b521e5a6b185d84d"} err="failed to get container status \"1d40517a4d0780badfeaad14145f786c32f32e9e1aa39d93b521e5a6b185d84d\": rpc error: code = Unknown desc = Error response from daemon: No such container: 1d40517a4d0780badfeaad14145f786c32f32e9e1aa39d93b521e5a6b185d84d"
	
	
	==> storage-provisioner [0a3c2a788d0b] <==
	I0920 22:11:44.772738       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0920 22:11:44.801097       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0920 22:11:44.801146       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0920 22:11:44.816288       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0920 22:11:44.816471       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-860203_b1dbb0fc-5860-471a-a80e-e4a41a5c5c0e!
	I0920 22:11:44.816565       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"42e63499-92a9-423a-98d8-5c0ba92a6934", APIVersion:"v1", ResourceVersion:"586", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-860203_b1dbb0fc-5860-471a-a80e-e4a41a5c5c0e became leader
	I0920 22:11:44.922023       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-860203_b1dbb0fc-5860-471a-a80e-e4a41a5c5c0e!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-860203 -n addons-860203
helpers_test.go:261: (dbg) Run:  kubectl --context addons-860203 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-2qtkk ingress-nginx-admission-patch-c6t65
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-860203 describe pod busybox ingress-nginx-admission-create-2qtkk ingress-nginx-admission-patch-c6t65
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-860203 describe pod busybox ingress-nginx-admission-create-2qtkk ingress-nginx-admission-patch-c6t65: exit status 1 (103.443326ms)

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-860203/192.168.49.2
	Start Time:       Fri, 20 Sep 2024 22:15:05 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5x6wk (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-5x6wk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m19s                   default-scheduler  Successfully assigned default/busybox to addons-860203
	  Normal   Pulling    7m58s (x4 over 9m18s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m57s (x4 over 9m18s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m57s (x4 over 9m18s)   kubelet            Error: ErrImagePull
	  Warning  Failed     7m33s (x6 over 9m17s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m18s (x20 over 9m17s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-2qtkk" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-c6t65" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-860203 describe pod busybox ingress-nginx-admission-create-2qtkk ingress-nginx-admission-patch-c6t65: exit status 1
--- FAIL: TestAddons/parallel/Registry (75.00s)
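TestAddons/parallel/Registry fails when the in-cluster HTTP probe of the registry Service does not answer in time. A minimal Go sketch of such a probe, useful for reproducing the timeout from inside any pod in the cluster (assumptions: cluster DNS resolves the conventional Service name registry.kube-system.svc.cluster.local, and the registry addon serves plain HTTP on the Service's default port):

	package main
	
	import (
		"fmt"
		"net/http"
		"os"
		"time"
	)
	
	func main() {
		// Bound each attempt explicitly so a hang surfaces as an error
		// instead of blocking indefinitely.
		client := &http.Client{Timeout: 10 * time.Second}
		// Conventional <service>.<namespace>.svc.cluster.local DNS name
		// (assumption: the addon's Service is "registry" in kube-system).
		resp, err := client.Get("http://registry.kube-system.svc.cluster.local/")
		if err != nil {
			fmt.Fprintln(os.Stderr, "registry probe failed:", err)
			os.Exit(1)
		}
		defer resp.Body.Close()
		fmt.Println("registry responded:", resp.Status) // a healthy registry root returns 200 OK
	}

If this probe times out while the registry pods themselves report Running, Service networking (DNS resolution or kube-proxy's Service programming) is a more likely culprit than the registry container itself.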

                                                
                                    

Test pass (318/342)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 15.84
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.42
9 TestDownloadOnly/v1.20.0/DeleteAll 0.38
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.22
12 TestDownloadOnly/v1.31.1/json-events 5.38
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.07
18 TestDownloadOnly/v1.31.1/DeleteAll 0.2
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.56
22 TestOffline 91.48
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 221.62
29 TestAddons/serial/Volcano 41.21
31 TestAddons/serial/GCPAuth/Namespaces 0.19
34 TestAddons/parallel/Ingress 21.44
35 TestAddons/parallel/InspektorGadget 11.7
36 TestAddons/parallel/MetricsServer 6.7
38 TestAddons/parallel/CSI 34.58
39 TestAddons/parallel/Headlamp 16.67
40 TestAddons/parallel/CloudSpanner 5.54
41 TestAddons/parallel/LocalPath 53.26
42 TestAddons/parallel/NvidiaDevicePlugin 6.69
43 TestAddons/parallel/Yakd 11.67
44 TestAddons/StoppedEnableDisable 6
45 TestCertOptions 46.61
46 TestCertExpiration 246.07
47 TestDockerFlags 37.03
48 TestForceSystemdFlag 46.33
49 TestForceSystemdEnv 39.9
55 TestErrorSpam/setup 30.48
56 TestErrorSpam/start 0.73
57 TestErrorSpam/status 1.01
58 TestErrorSpam/pause 1.47
59 TestErrorSpam/unpause 1.59
60 TestErrorSpam/stop 2.17
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 74.76
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 27.29
67 TestFunctional/serial/KubeContext 0.06
68 TestFunctional/serial/KubectlGetPods 0.09
71 TestFunctional/serial/CacheCmd/cache/add_remote 3.17
72 TestFunctional/serial/CacheCmd/cache/add_local 0.96
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
74 TestFunctional/serial/CacheCmd/cache/list 0.05
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
76 TestFunctional/serial/CacheCmd/cache/cache_reload 1.57
77 TestFunctional/serial/CacheCmd/cache/delete 0.11
78 TestFunctional/serial/MinikubeKubectlCmd 0.14
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
80 TestFunctional/serial/ExtraConfig 42.03
81 TestFunctional/serial/ComponentHealth 0.09
82 TestFunctional/serial/LogsCmd 1.18
83 TestFunctional/serial/LogsFileCmd 1.23
84 TestFunctional/serial/InvalidService 5.25
86 TestFunctional/parallel/ConfigCmd 0.49
87 TestFunctional/parallel/DashboardCmd 15.78
88 TestFunctional/parallel/DryRun 0.58
89 TestFunctional/parallel/InternationalLanguage 0.26
90 TestFunctional/parallel/StatusCmd 1.31
94 TestFunctional/parallel/ServiceCmdConnect 10.71
95 TestFunctional/parallel/AddonsCmd 0.21
96 TestFunctional/parallel/PersistentVolumeClaim 28.9
98 TestFunctional/parallel/SSHCmd 0.78
99 TestFunctional/parallel/CpCmd 1.98
101 TestFunctional/parallel/FileSync 0.34
102 TestFunctional/parallel/CertSync 2.18
106 TestFunctional/parallel/NodeLabels 0.11
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.28
110 TestFunctional/parallel/License 0.22
112 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.62
113 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
115 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.33
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.12
117 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
121 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
122 TestFunctional/parallel/ServiceCmd/DeployApp 7.22
123 TestFunctional/parallel/ServiceCmd/List 0.49
124 TestFunctional/parallel/ServiceCmd/JSONOutput 0.54
125 TestFunctional/parallel/ProfileCmd/profile_not_create 0.52
126 TestFunctional/parallel/ServiceCmd/HTTPS 0.47
127 TestFunctional/parallel/ProfileCmd/profile_list 0.52
128 TestFunctional/parallel/ServiceCmd/Format 0.55
129 TestFunctional/parallel/ProfileCmd/profile_json_output 0.64
130 TestFunctional/parallel/ServiceCmd/URL 0.55
131 TestFunctional/parallel/MountCmd/any-port 8.56
132 TestFunctional/parallel/MountCmd/specific-port 2.03
133 TestFunctional/parallel/MountCmd/VerifyCleanup 2.38
134 TestFunctional/parallel/Version/short 0.08
135 TestFunctional/parallel/Version/components 1.12
136 TestFunctional/parallel/ImageCommands/ImageListShort 0.25
137 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
138 TestFunctional/parallel/ImageCommands/ImageListJson 0.26
139 TestFunctional/parallel/ImageCommands/ImageListYaml 0.25
140 TestFunctional/parallel/ImageCommands/ImageBuild 3.48
141 TestFunctional/parallel/ImageCommands/Setup 0.71
142 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.97
143 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.79
144 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.03
145 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.32
146 TestFunctional/parallel/ImageCommands/ImageRemove 0.43
147 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.72
148 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.51
149 TestFunctional/parallel/DockerEnv/bash 1.31
150 TestFunctional/parallel/UpdateContextCmd/no_changes 0.19
151 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
152 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.18
153 TestFunctional/delete_echo-server_images 0.05
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.01
159 TestMultiControlPlane/serial/StartCluster 123.47
160 TestMultiControlPlane/serial/DeployApp 8.26
161 TestMultiControlPlane/serial/PingHostFromPods 1.72
162 TestMultiControlPlane/serial/AddWorkerNode 29.32
163 TestMultiControlPlane/serial/NodeLabels 0.17
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.08
165 TestMultiControlPlane/serial/CopyFile 19.27
166 TestMultiControlPlane/serial/StopSecondaryNode 11.92
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.8
168 TestMultiControlPlane/serial/RestartSecondaryNode 37.57
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.02
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 262.21
171 TestMultiControlPlane/serial/DeleteSecondaryNode 11.33
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.74
173 TestMultiControlPlane/serial/StopCluster 32.97
174 TestMultiControlPlane/serial/RestartCluster 103.27
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.8
176 TestMultiControlPlane/serial/AddSecondaryNode 50.32
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.03
180 TestImageBuild/serial/Setup 30.96
181 TestImageBuild/serial/NormalBuild 1.87
182 TestImageBuild/serial/BuildWithBuildArg 1.02
183 TestImageBuild/serial/BuildWithDockerIgnore 0.92
184 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.85
188 TestJSONOutput/start/Command 77.95
189 TestJSONOutput/start/Audit 0
191 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/pause/Command 0.6
195 TestJSONOutput/pause/Audit 0
197 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/unpause/Command 0.52
201 TestJSONOutput/unpause/Audit 0
203 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
206 TestJSONOutput/stop/Command 10.97
207 TestJSONOutput/stop/Audit 0
209 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
210 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
211 TestErrorJSONOutput 0.22
213 TestKicCustomNetwork/create_custom_network 35.13
214 TestKicCustomNetwork/use_default_bridge_network 34.37
215 TestKicExistingNetwork 36.13
216 TestKicCustomSubnet 34.11
217 TestKicStaticIP 33.57
218 TestMainNoArgs 0.05
219 TestMinikubeProfile 67.65
222 TestMountStart/serial/StartWithMountFirst 7.94
223 TestMountStart/serial/VerifyMountFirst 0.27
224 TestMountStart/serial/StartWithMountSecond 7.47
225 TestMountStart/serial/VerifyMountSecond 0.26
226 TestMountStart/serial/DeleteFirst 1.49
227 TestMountStart/serial/VerifyMountPostDelete 0.25
228 TestMountStart/serial/Stop 1.2
229 TestMountStart/serial/RestartStopped 8.53
230 TestMountStart/serial/VerifyMountPostStop 0.26
233 TestMultiNode/serial/FreshStart2Nodes 85.12
234 TestMultiNode/serial/DeployApp2Nodes 44.51
235 TestMultiNode/serial/PingHostFrom2Pods 1.06
236 TestMultiNode/serial/AddNode 18.68
237 TestMultiNode/serial/MultiNodeLabels 0.09
238 TestMultiNode/serial/ProfileList 0.72
239 TestMultiNode/serial/CopyFile 10.24
240 TestMultiNode/serial/StopNode 2.22
241 TestMultiNode/serial/StartAfterStop 11.1
242 TestMultiNode/serial/RestartKeepsNodes 97.01
243 TestMultiNode/serial/DeleteNode 5.68
244 TestMultiNode/serial/StopMultiNode 21.57
245 TestMultiNode/serial/RestartMultiNode 62.36
246 TestMultiNode/serial/ValidateNameConflict 34.65
251 TestPreload 106.37
253 TestScheduledStopUnix 104.7
254 TestSkaffold 118.13
256 TestInsufficientStorage 10.97
257 TestRunningBinaryUpgrade 78.95
259 TestKubernetesUpgrade 378.65
260 TestMissingContainerUpgrade 162.19
262 TestNoKubernetes/serial/StartNoK8sWithVersion 0.12
263 TestNoKubernetes/serial/StartWithK8s 43.6
264 TestNoKubernetes/serial/StartWithStopK8s 18.55
265 TestNoKubernetes/serial/Start 9.6
266 TestNoKubernetes/serial/VerifyK8sNotRunning 0.29
267 TestNoKubernetes/serial/ProfileList 1.08
268 TestNoKubernetes/serial/Stop 1.22
269 TestNoKubernetes/serial/StartNoArgs 8.09
270 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.27
271 TestStoppedBinaryUpgrade/Setup 0.58
272 TestStoppedBinaryUpgrade/Upgrade 86.9
273 TestStoppedBinaryUpgrade/MinikubeLogs 1.59
282 TestPause/serial/Start 84.45
283 TestPause/serial/SecondStartNoReconfiguration 31.92
295 TestPause/serial/Pause 0.76
296 TestPause/serial/VerifyStatus 0.38
297 TestPause/serial/Unpause 0.68
298 TestPause/serial/PauseAgain 0.96
299 TestPause/serial/DeletePaused 2.39
300 TestPause/serial/VerifyDeletedResources 5.23
302 TestStartStop/group/old-k8s-version/serial/FirstStart 140.48
303 TestStartStop/group/old-k8s-version/serial/DeployApp 8.57
304 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.13
305 TestStartStop/group/old-k8s-version/serial/Stop 10.9
306 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
307 TestStartStop/group/old-k8s-version/serial/SecondStart 145.39
309 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 50.35
310 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.38
311 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.18
312 TestStartStop/group/default-k8s-diff-port/serial/Stop 10.97
313 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
314 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
315 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 270.7
316 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.12
317 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.26
318 TestStartStop/group/old-k8s-version/serial/Pause 3.94
320 TestStartStop/group/embed-certs/serial/FirstStart 48.68
321 TestStartStop/group/embed-certs/serial/DeployApp 9.37
322 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.14
323 TestStartStop/group/embed-certs/serial/Stop 10.99
324 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
325 TestStartStop/group/embed-certs/serial/SecondStart 266.85
326 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
327 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.1
328 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.22
329 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.96
331 TestStartStop/group/no-preload/serial/FirstStart 52.85
332 TestStartStop/group/no-preload/serial/DeployApp 10.44
333 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.09
334 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
335 TestStartStop/group/no-preload/serial/Stop 10.9
336 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
337 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.23
338 TestStartStop/group/embed-certs/serial/Pause 3.95
339 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.32
340 TestStartStop/group/no-preload/serial/SecondStart 295.35
342 TestStartStop/group/newest-cni/serial/FirstStart 44.32
343 TestStartStop/group/newest-cni/serial/DeployApp 0
344 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.2
345 TestStartStop/group/newest-cni/serial/Stop 11.16
346 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
347 TestStartStop/group/newest-cni/serial/SecondStart 17.64
348 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
349 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
350 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
351 TestStartStop/group/newest-cni/serial/Pause 3.01
352 TestNetworkPlugins/group/auto/Start 44.75
353 TestNetworkPlugins/group/auto/KubeletFlags 0.31
354 TestNetworkPlugins/group/auto/NetCatPod 10.3
355 TestNetworkPlugins/group/auto/DNS 0.23
356 TestNetworkPlugins/group/auto/Localhost 0.17
357 TestNetworkPlugins/group/auto/HairPin 0.2
358 TestNetworkPlugins/group/kindnet/Start 66.73
359 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
360 TestNetworkPlugins/group/kindnet/KubeletFlags 0.29
361 TestNetworkPlugins/group/kindnet/NetCatPod 9.28
362 TestNetworkPlugins/group/kindnet/DNS 0.23
363 TestNetworkPlugins/group/kindnet/Localhost 0.19
364 TestNetworkPlugins/group/kindnet/HairPin 0.2
365 TestNetworkPlugins/group/calico/Start 76.02
366 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
367 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.13
368 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.28
369 TestStartStop/group/no-preload/serial/Pause 4.21
370 TestNetworkPlugins/group/custom-flannel/Start 65
371 TestNetworkPlugins/group/calico/ControllerPod 6.01
372 TestNetworkPlugins/group/calico/KubeletFlags 0.4
373 TestNetworkPlugins/group/calico/NetCatPod 12.55
374 TestNetworkPlugins/group/calico/DNS 0.29
375 TestNetworkPlugins/group/calico/Localhost 0.31
376 TestNetworkPlugins/group/calico/HairPin 0.45
377 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.36
378 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.37
379 TestNetworkPlugins/group/false/Start 81.94
380 TestNetworkPlugins/group/custom-flannel/DNS 0.26
381 TestNetworkPlugins/group/custom-flannel/Localhost 0.21
382 TestNetworkPlugins/group/custom-flannel/HairPin 0.17
383 TestNetworkPlugins/group/enable-default-cni/Start 80.04
384 TestNetworkPlugins/group/false/KubeletFlags 0.3
385 TestNetworkPlugins/group/false/NetCatPod 11.27
386 TestNetworkPlugins/group/false/DNS 0.22
387 TestNetworkPlugins/group/false/Localhost 0.17
388 TestNetworkPlugins/group/false/HairPin 0.17
389 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.44
390 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.43
391 TestNetworkPlugins/group/flannel/Start 60.26
392 TestNetworkPlugins/group/enable-default-cni/DNS 0.45
393 TestNetworkPlugins/group/enable-default-cni/Localhost 0.28
394 TestNetworkPlugins/group/enable-default-cni/HairPin 0.24
395 TestNetworkPlugins/group/bridge/Start 82.93
396 TestNetworkPlugins/group/flannel/ControllerPod 6.01
397 TestNetworkPlugins/group/flannel/KubeletFlags 0.39
398 TestNetworkPlugins/group/flannel/NetCatPod 12.3
399 TestNetworkPlugins/group/flannel/DNS 0.19
400 TestNetworkPlugins/group/flannel/Localhost 0.18
401 TestNetworkPlugins/group/flannel/HairPin 0.17
402 TestNetworkPlugins/group/kubenet/Start 75.4
403 TestNetworkPlugins/group/bridge/KubeletFlags 0.37
404 TestNetworkPlugins/group/bridge/NetCatPod 11.35
405 TestNetworkPlugins/group/bridge/DNS 0.28
406 TestNetworkPlugins/group/bridge/Localhost 0.2
407 TestNetworkPlugins/group/bridge/HairPin 0.2
408 TestNetworkPlugins/group/kubenet/KubeletFlags 0.28
409 TestNetworkPlugins/group/kubenet/NetCatPod 10.27
410 TestNetworkPlugins/group/kubenet/DNS 0.18
411 TestNetworkPlugins/group/kubenet/Localhost 0.16
412 TestNetworkPlugins/group/kubenet/HairPin 0.16
TestDownloadOnly/v1.20.0/json-events (15.84s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-837951 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-837951 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (15.838185975s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (15.84s)
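The json-events variant exercises minikube's machine-readable output: with -o=json, progress is emitted as one JSON object per line on stdout instead of human-oriented text. A minimal consumer sketch (assumptions: the profile name demo-profile is made up for illustration, and each event carries a top-level "type" field; events are decoded generically as maps rather than against a fixed schema):

	package main
	
	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os/exec"
	)
	
	func main() {
		// Hypothetical invocation mirroring a download-only run.
		cmd := exec.Command("out/minikube-linux-arm64", "start", "-o=json",
			"--download-only", "-p", "demo-profile", "--driver=docker")
		stdout, err := cmd.StdoutPipe()
		if err != nil {
			panic(err)
		}
		if err := cmd.Start(); err != nil {
			panic(err)
		}
		sc := bufio.NewScanner(stdout)
		sc.Buffer(make([]byte, 1024*1024), 1024*1024) // some event lines are long
		for sc.Scan() {
			var ev map[string]interface{}
			if json.Unmarshal(sc.Bytes(), &ev) != nil {
				continue // tolerate any stray non-JSON output
			}
			fmt.Println("event type:", ev["type"])
		}
		cmd.Wait()
	}

Decoding into map[string]interface{} keeps the sketch robust to schema differences between minikube versions; a stricter consumer would switch on the event type and unmarshal each payload into its own struct.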

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0920 22:10:34.470715 1436493 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I0920 22:10:34.470808 1436493 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19672-1431110/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
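preload-exists simply verifies that the preloaded image tarball landed in the local cache (the exact path is shown in the log line above). An equivalent stand-alone check, as a sketch (assumption: the default MINIKUBE_HOME of ~/.minikube; the CI run above uses a job-specific directory instead):

	package main
	
	import (
		"fmt"
		"os"
		"path/filepath"
	)
	
	func main() {
		home, _ := os.UserHomeDir()
		// Default cache location; CI overrides MINIKUBE_HOME to a per-job path.
		p := filepath.Join(home, ".minikube", "cache", "preloaded-tarball",
			"preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4")
		fi, err := os.Stat(p)
		if err != nil {
			fmt.Println("preload missing:", err)
			os.Exit(1)
		}
		fmt.Printf("preload present: %s (%d bytes)\n", p, fi.Size())
	}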

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.42s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-837951
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-837951: exit status 85 (416.767623ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-837951 | jenkins | v1.34.0 | 20 Sep 24 22:10 UTC |          |
	|         | -p download-only-837951        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 22:10:18
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 22:10:18.676571 1436498 out.go:345] Setting OutFile to fd 1 ...
	I0920 22:10:18.676760 1436498 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 22:10:18.676768 1436498 out.go:358] Setting ErrFile to fd 2...
	I0920 22:10:18.676773 1436498 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 22:10:18.677020 1436498 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-1431110/.minikube/bin
	W0920 22:10:18.681309 1436498 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19672-1431110/.minikube/config/config.json: open /home/jenkins/minikube-integration/19672-1431110/.minikube/config/config.json: no such file or directory
	I0920 22:10:18.681864 1436498 out.go:352] Setting JSON to true
	I0920 22:10:18.683276 1436498 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":21170,"bootTime":1726849049,"procs":154,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0920 22:10:18.683394 1436498 start.go:139] virtualization:  
	I0920 22:10:18.685559 1436498 out.go:97] [download-only-837951] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	W0920 22:10:18.685713 1436498 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19672-1431110/.minikube/cache/preloaded-tarball: no such file or directory
	I0920 22:10:18.685814 1436498 notify.go:220] Checking for updates...
	I0920 22:10:18.687420 1436498 out.go:169] MINIKUBE_LOCATION=19672
	I0920 22:10:18.688680 1436498 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 22:10:18.690355 1436498 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19672-1431110/kubeconfig
	I0920 22:10:18.691580 1436498 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-1431110/.minikube
	I0920 22:10:18.692611 1436498 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0920 22:10:18.694631 1436498 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0920 22:10:18.694897 1436498 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 22:10:18.715975 1436498 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0920 22:10:18.716120 1436498 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 22:10:18.781402 1436498 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-20 22:10:18.771359015 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 22:10:18.781522 1436498 docker.go:318] overlay module found
	I0920 22:10:18.783235 1436498 out.go:97] Using the docker driver based on user configuration
	I0920 22:10:18.783262 1436498 start.go:297] selected driver: docker
	I0920 22:10:18.783273 1436498 start.go:901] validating driver "docker" against <nil>
	I0920 22:10:18.783390 1436498 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 22:10:18.833449 1436498 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-20 22:10:18.82349502 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 22:10:18.833660 1436498 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 22:10:18.833963 1436498 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0920 22:10:18.834130 1436498 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0920 22:10:18.836178 1436498 out.go:169] Using Docker driver with root privileges
	I0920 22:10:18.837340 1436498 cni.go:84] Creating CNI manager for ""
	I0920 22:10:18.837409 1436498 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0920 22:10:18.837502 1436498 start.go:340] cluster config:
	{Name:download-only-837951 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-837951 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 22:10:18.838738 1436498 out.go:97] Starting "download-only-837951" primary control-plane node in "download-only-837951" cluster
	I0920 22:10:18.838757 1436498 cache.go:121] Beginning downloading kic base image for docker with docker
	I0920 22:10:18.839966 1436498 out.go:97] Pulling base image v0.0.45-1726784731-19672 ...
	I0920 22:10:18.839988 1436498 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0920 22:10:18.840166 1436498 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon
	I0920 22:10:18.855940 1436498 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed to local cache
	I0920 22:10:18.856137 1436498 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory
	I0920 22:10:18.856240 1436498 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed to local cache
	I0920 22:10:18.897741 1436498 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0920 22:10:18.897772 1436498 cache.go:56] Caching tarball of preloaded images
	I0920 22:10:18.897938 1436498 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0920 22:10:18.900066 1436498 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0920 22:10:18.900103 1436498 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0920 22:10:18.983304 1436498 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /home/jenkins/minikube-integration/19672-1431110/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0920 22:10:23.114697 1436498 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0920 22:10:23.114891 1436498 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19672-1431110/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0920 22:10:24.175452 1436498 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0920 22:10:24.175966 1436498 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/download-only-837951/config.json ...
	I0920 22:10:24.176019 1436498 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/download-only-837951/config.json: {Name:mk9d8f3d0245dbc13d5292ea1e03117696131084 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:10:24.176239 1436498 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0920 22:10:24.177117 1436498 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/19672-1431110/.minikube/cache/linux/arm64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-837951 host does not exist
	  To start a cluster, run: "minikube start -p download-only-837951"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.42s)
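The preload step in the log above (preload.go:236-254) downloads the tarball with a checksum=md5:... query, then saves and verifies that checksum against the file on disk. As a minimal sketch of what that verification amounts to (illustrative only, not minikube's implementation; the cache path is abbreviated to the bare filename):

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"os"
	)

	// md5sum streams a file through crypto/md5 and returns the hex digest.
	func md5sum(path string) (string, error) {
		f, err := os.Open(path)
		if err != nil {
			return "", err
		}
		defer f.Close()
		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			return "", err
		}
		return hex.EncodeToString(h.Sum(nil)), nil
	}

	func main() {
		// Values taken from the download URL logged above.
		const tarball = "preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4"
		const want = "1a3e8f9b29e6affec63d76d0d3000942" // the checksum=md5:... value
		got, err := md5sum(tarball)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if got != want {
			fmt.Printf("checksum mismatch: got %s, want %s\n", got, want)
			os.Exit(1)
		}
		fmt.Println("preload tarball verified")
	}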

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.38s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.38s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-837951
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.22s)

                                                
                                    
TestDownloadOnly/v1.31.1/json-events (5.38s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-363766 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-363766 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=docker  --container-runtime=docker: (5.377201058s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (5.38s)

                                                
                                    
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I0920 22:10:40.872910 1436493 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0920 22:10:40.872953 1436493 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19672-1431110/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-363766
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-363766: exit status 85 (65.393037ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-837951 | jenkins | v1.34.0 | 20 Sep 24 22:10 UTC |                     |
	|         | -p download-only-837951        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 20 Sep 24 22:10 UTC | 20 Sep 24 22:10 UTC |
	| delete  | -p download-only-837951        | download-only-837951 | jenkins | v1.34.0 | 20 Sep 24 22:10 UTC | 20 Sep 24 22:10 UTC |
	| start   | -o=json --download-only        | download-only-363766 | jenkins | v1.34.0 | 20 Sep 24 22:10 UTC |                     |
	|         | -p download-only-363766        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 22:10:35
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 22:10:35.543244 1436697 out.go:345] Setting OutFile to fd 1 ...
	I0920 22:10:35.543383 1436697 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 22:10:35.543394 1436697 out.go:358] Setting ErrFile to fd 2...
	I0920 22:10:35.543399 1436697 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 22:10:35.543648 1436697 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-1431110/.minikube/bin
	I0920 22:10:35.544052 1436697 out.go:352] Setting JSON to true
	I0920 22:10:35.544967 1436697 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":21187,"bootTime":1726849049,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0920 22:10:35.545039 1436697 start.go:139] virtualization:  
	I0920 22:10:35.546840 1436697 out.go:97] [download-only-363766] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0920 22:10:35.547060 1436697 notify.go:220] Checking for updates...
	I0920 22:10:35.548219 1436697 out.go:169] MINIKUBE_LOCATION=19672
	I0920 22:10:35.549388 1436697 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 22:10:35.550715 1436697 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19672-1431110/kubeconfig
	I0920 22:10:35.551901 1436697 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-1431110/.minikube
	I0920 22:10:35.553077 1436697 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0920 22:10:35.556042 1436697 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0920 22:10:35.556352 1436697 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 22:10:35.578292 1436697 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0920 22:10:35.578406 1436697 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 22:10:35.633860 1436697 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-20 22:10:35.623349656 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 22:10:35.633973 1436697 docker.go:318] overlay module found
	I0920 22:10:35.635322 1436697 out.go:97] Using the docker driver based on user configuration
	I0920 22:10:35.635345 1436697 start.go:297] selected driver: docker
	I0920 22:10:35.635352 1436697 start.go:901] validating driver "docker" against <nil>
	I0920 22:10:35.635459 1436697 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 22:10:35.680671 1436697 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-20 22:10:35.671355334 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 22:10:35.680886 1436697 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 22:10:35.681151 1436697 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0920 22:10:35.681318 1436697 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0920 22:10:35.683188 1436697 out.go:169] Using Docker driver with root privileges
	I0920 22:10:35.684959 1436697 cni.go:84] Creating CNI manager for ""
	I0920 22:10:35.685039 1436697 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 22:10:35.685052 1436697 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 22:10:35.685137 1436697 start.go:340] cluster config:
	{Name:download-only-363766 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-363766 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 22:10:35.686627 1436697 out.go:97] Starting "download-only-363766" primary control-plane node in "download-only-363766" cluster
	I0920 22:10:35.686648 1436697 cache.go:121] Beginning downloading kic base image for docker with docker
	I0920 22:10:35.688240 1436697 out.go:97] Pulling base image v0.0.45-1726784731-19672 ...
	I0920 22:10:35.688271 1436697 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 22:10:35.688377 1436697 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon
	I0920 22:10:35.704138 1436697 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed to local cache
	I0920 22:10:35.704280 1436697 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory
	I0920 22:10:35.704302 1436697 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory, skipping pull
	I0920 22:10:35.704307 1436697 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed exists in cache, skipping pull
	I0920 22:10:35.704314 1436697 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed as a tarball
	I0920 22:10:35.747747 1436697 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 22:10:35.747778 1436697 cache.go:56] Caching tarball of preloaded images
	I0920 22:10:35.747933 1436697 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 22:10:35.749962 1436697 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0920 22:10:35.749983 1436697 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	I0920 22:10:35.837803 1436697 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4?checksum=md5:402f69b5e09ccb1e1dbe401b4cdd104d -> /home/jenkins/minikube-integration/19672-1431110/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0920 22:10:39.427407 1436697 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	I0920 22:10:39.427516 1436697 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19672-1431110/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	I0920 22:10:40.189229 1436697 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 22:10:40.189629 1436697 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/download-only-363766/config.json ...
	I0920 22:10:40.189665 1436697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/download-only-363766/config.json: {Name:mk945e763bf1cbd85876025f43e4fcd92f779e15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:10:40.189870 1436697 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 22:10:40.190769 1436697 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/19672-1431110/.minikube/cache/linux/arm64/v1.31.1/kubectl
	
	
	* The control-plane node download-only-363766 host does not exist
	  To start a cluster, run: "minikube start -p download-only-363766"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.07s)
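Note the different CNI decisions between the two runs: for v1.20.0 the log reads "CNI unnecessary in this configuration, recommending no CNI" (cni.go:162), while for v1.31.1 the docker driver plus docker runtime on Kubernetes v1.24+ gets the bridge CNI (cni.go:158), which is why only the second cluster config carries NetworkPlugin:cni. A rough sketch of that version-gated branch, written here purely for illustration (minikube's real logic lives in its cni package and handles more cases):

	package main

	import (
		"fmt"

		"golang.org/x/mod/semver"
	)

	// chooseCNI mirrors the decision visible in the logs: docker driver with
	// docker runtime gets no CNI before Kubernetes v1.24, bridge from v1.24 on.
	func chooseCNI(driver, containerRuntime, k8sVersion string) string {
		if driver == "docker" && containerRuntime == "docker" {
			if semver.Compare(k8sVersion, "v1.24.0") < 0 {
				return "" // "CNI unnecessary", as logged for v1.20.0
			}
		}
		return "bridge" // as logged for v1.31.1
	}

	func main() {
		fmt.Println(chooseCNI("docker", "docker", "v1.20.0") == "") // true
		fmt.Println(chooseCNI("docker", "docker", "v1.31.1"))       // bridge
	}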

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.20s)

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-363766
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestBinaryMirror (0.56s)

                                                
                                                
=== RUN   TestBinaryMirror
I0920 22:10:42.048339 1436493 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-911328 --alsologtostderr --binary-mirror http://127.0.0.1:45811 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-911328" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-911328
--- PASS: TestBinaryMirror (0.56s)
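The binary.go:74 line above shows the URL shape used throughout these downloads: the checksum=file:<url> query parameter points the downloader at a detached .sha256 file published next to the binary (the convention understood by hashicorp/go-getter-style downloaders). A small, hypothetical helper that assembles such a URL, shown only to make the shape explicit:

	package main

	import "fmt"

	// kubectlURL builds the kubectl download URL with a detached-checksum
	// query, matching the form logged above. Hypothetical helper.
	func kubectlURL(version, osName, arch string) string {
		bin := fmt.Sprintf("https://dl.k8s.io/release/%s/bin/%s/%s/kubectl", version, osName, arch)
		return bin + "?checksum=file:" + bin + ".sha256"
	}

	func main() {
		fmt.Println(kubectlURL("v1.31.1", "linux", "arm64"))
	}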

                                                
                                    
TestOffline (91.48s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-arm64 start -p offline-docker-232064 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-arm64 start -p offline-docker-232064 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (1m28.574881214s)
helpers_test.go:175: Cleaning up "offline-docker-232064" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p offline-docker-232064
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p offline-docker-232064: (2.903569877s)
--- PASS: TestOffline (91.48s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-860203
addons_test.go:975: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-860203: exit status 85 (60.431035ms)

                                                
                                                
-- stdout --
	* Profile "addons-860203" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-860203"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-860203
addons_test.go:986: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-860203: exit status 85 (64.707398ms)

                                                
                                                
-- stdout --
	* Profile "addons-860203" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-860203"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/Setup (221.62s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-arm64 start -p addons-860203 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns
addons_test.go:107: (dbg) Done: out/minikube-linux-arm64 start -p addons-860203 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns: (3m41.620529086s)
--- PASS: TestAddons/Setup (221.62s)

                                                
                                    
TestAddons/serial/Volcano (41.21s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:851: volcano-controller stabilized in 48.267018ms
addons_test.go:843: volcano-admission stabilized in 48.75929ms
addons_test.go:835: volcano-scheduler stabilized in 49.023292ms
addons_test.go:857: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-fw6cg" [d66eac41-e6b2-4471-8eb3-c5dc0159c550] Running
addons_test.go:857: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.004257892s
addons_test.go:861: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-ggzt9" [b47c04b0-294b-4bd4-bf09-458e0572df48] Running
addons_test.go:861: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004751864s
addons_test.go:865: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-m9b82" [dd816a76-c3b2-4288-a3ca-6629726bd54b] Running
addons_test.go:865: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003831535s
addons_test.go:870: (dbg) Run:  kubectl --context addons-860203 delete -n volcano-system job volcano-admission-init
addons_test.go:876: (dbg) Run:  kubectl --context addons-860203 create -f testdata/vcjob.yaml
addons_test.go:884: (dbg) Run:  kubectl --context addons-860203 get vcjob -n my-volcano
addons_test.go:902: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [61dd4d7e-0198-4193-94f9-8f71c4d393cf] Pending
helpers_test.go:344: "test-job-nginx-0" [61dd4d7e-0198-4193-94f9-8f71c4d393cf] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [61dd4d7e-0198-4193-94f9-8f71c4d393cf] Running
addons_test.go:902: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 14.003733085s
addons_test.go:906: (dbg) Run:  out/minikube-linux-arm64 -p addons-860203 addons disable volcano --alsologtostderr -v=1
addons_test.go:906: (dbg) Done: out/minikube-linux-arm64 -p addons-860203 addons disable volcano --alsologtostderr -v=1: (10.561346241s)
--- PASS: TestAddons/serial/Volcano (41.21s)
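Every "waiting ... for pods matching <selector>" line above (helpers_test.go:344) follows one pattern: list pods by label selector and poll until each reports phase Running. A client-go sketch of that pattern, assuming a kubeconfig at the default path; this is illustrative, not the suite's actual helper:

	package main

	import (
		"context"
		"fmt"
		"os"
		"path/filepath"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		home, _ := os.UserHomeDir()
		cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(home, ".kube", "config"))
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// Poll every 2s, give up after 6m, mirroring the timeouts in the log.
		err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pods, err := client.CoreV1().Pods("volcano-system").List(ctx, metav1.ListOptions{
					LabelSelector: "app=volcano-scheduler",
				})
				if err != nil || len(pods.Items) == 0 {
					return false, nil // not ready yet; keep polling
				}
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						return false, nil
					}
				}
				return true, nil
			})
		if err != nil {
			panic(err)
		}
		fmt.Println("pods healthy")
	}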

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.19s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context addons-860203 create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context addons-860203 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)

                                                
                                    
TestAddons/parallel/Ingress (21.44s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:205: (dbg) Run:  kubectl --context addons-860203 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:230: (dbg) Run:  kubectl --context addons-860203 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:243: (dbg) Run:  kubectl --context addons-860203 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [b1ddce0f-84aa-4e65-bdf8-3b5b54f4fb61] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [b1ddce0f-84aa-4e65-bdf8-3b5b54f4fb61] Running
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003661617s
I0920 22:24:29.510156 1436493 kapi.go:150] Service nginx in namespace default found.
addons_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p addons-860203 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:284: (dbg) Run:  kubectl --context addons-860203 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:289: (dbg) Run:  out/minikube-linux-arm64 -p addons-860203 ip
addons_test.go:295: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p addons-860203 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:304: (dbg) Done: out/minikube-linux-arm64 -p addons-860203 addons disable ingress-dns --alsologtostderr -v=1: (1.8123377s)
addons_test.go:309: (dbg) Run:  out/minikube-linux-arm64 -p addons-860203 addons disable ingress --alsologtostderr -v=1
addons_test.go:309: (dbg) Done: out/minikube-linux-arm64 -p addons-860203 addons disable ingress --alsologtostderr -v=1: (7.868187912s)
--- PASS: TestAddons/parallel/Ingress (21.44s)
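The curl at addons_test.go:260 sends the request with an explicit Host header so ingress-nginx routes it to the nginx backend regardless of the address dialed. The same check in Go; note that the test curls 127.0.0.1 from inside "minikube ssh", while this sketch dials the cluster IP logged for this run, so treat the address as an assumption:

	package main

	import (
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		req, err := http.NewRequest(http.MethodGet, "http://192.168.49.2/", nil)
		if err != nil {
			panic(err)
		}
		// Setting req.Host (not a header field) controls the HTTP/1.1 Host
		// line, which is what the Ingress rule matches on.
		req.Host = "nginx.example.com"

		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.Status, len(body), "bytes")
	}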

                                                
                                    
TestAddons/parallel/InspektorGadget (11.7s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-2bq7q" [cf8eac4e-5d9a-4e5b-bdfd-d2a69bc9f2c5] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004656572s
addons_test.go:789: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-860203
addons_test.go:789: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-860203: (5.696379996s)
--- PASS: TestAddons/parallel/InspektorGadget (11.70s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.7s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 3.301784ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-px9x8" [dc37fd84-501c-485d-a0df-4e3a4e3d94e7] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.00417844s
addons_test.go:413: (dbg) Run:  kubectl --context addons-860203 top pods -n kube-system
addons_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p addons-860203 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.70s)

                                                
                                    
TestAddons/parallel/CSI (34.58s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0920 22:23:25.874606 1436493 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0920 22:23:25.879359 1436493 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0920 22:23:25.879392 1436493 kapi.go:107] duration metric: took 7.162202ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:505: csi-hostpath-driver pods stabilized in 7.171433ms
addons_test.go:508: (dbg) Run:  kubectl --context addons-860203 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860203 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860203 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:518: (dbg) Run:  kubectl --context addons-860203 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [3d5712cb-6935-4635-9697-ce5b8e14b258] Pending
helpers_test.go:344: "task-pv-pod" [3d5712cb-6935-4635-9697-ce5b8e14b258] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [3d5712cb-6935-4635-9697-ce5b8e14b258] Running
addons_test.go:523: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.004164998s
addons_test.go:528: (dbg) Run:  kubectl --context addons-860203 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:533: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-860203 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-860203 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:538: (dbg) Run:  kubectl --context addons-860203 delete pod task-pv-pod
addons_test.go:544: (dbg) Run:  kubectl --context addons-860203 delete pvc hpvc
addons_test.go:550: (dbg) Run:  kubectl --context addons-860203 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860203 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860203 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860203 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860203 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860203 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860203 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860203 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:560: (dbg) Run:  kubectl --context addons-860203 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [dfc2b424-dabb-409c-9e03-e9e5f85bc4cf] Pending
helpers_test.go:344: "task-pv-pod-restore" [dfc2b424-dabb-409c-9e03-e9e5f85bc4cf] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [dfc2b424-dabb-409c-9e03-e9e5f85bc4cf] Running
addons_test.go:565: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003812363s
addons_test.go:570: (dbg) Run:  kubectl --context addons-860203 delete pod task-pv-pod-restore
addons_test.go:574: (dbg) Run:  kubectl --context addons-860203 delete pvc hpvc-restore
addons_test.go:578: (dbg) Run:  kubectl --context addons-860203 delete volumesnapshot new-snapshot-demo
addons_test.go:582: (dbg) Run:  out/minikube-linux-arm64 -p addons-860203 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:582: (dbg) Done: out/minikube-linux-arm64 -p addons-860203 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.641864206s)
addons_test.go:586: (dbg) Run:  out/minikube-linux-arm64 -p addons-860203 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (34.58s)
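The repeated "get pvc ... -o jsonpath={.status.phase}" polls above are all waiting for the claim to reach phase Bound before the next step of the snapshot/restore flow. The client-go equivalent of that wait, under the same default-kubeconfig assumption as the pod-wait sketch earlier:

	package main

	import (
		"context"
		"fmt"
		"os"
		"path/filepath"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		home, _ := os.UserHomeDir()
		cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(home, ".kube", "config"))
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pvc, err := client.CoreV1().PersistentVolumeClaims("default").
					Get(ctx, "hpvc-restore", metav1.GetOptions{})
				if err != nil {
					return false, nil // treat transient errors as "not yet"
				}
				return pvc.Status.Phase == corev1.ClaimBound, nil
			})
		if err != nil {
			panic(err)
		}
		fmt.Println("pvc bound")
	}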

                                                
                                    
TestAddons/parallel/Headlamp (16.67s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-860203 --alsologtostderr -v=1
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-pslgs" [e88910d8-59cf-4983-a518-a558cb4ee3ce] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-pslgs" [e88910d8-59cf-4983-a518-a558cb4ee3ce] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-pslgs" [e88910d8-59cf-4983-a518-a558cb4ee3ce] Running
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.003442505s
addons_test.go:777: (dbg) Run:  out/minikube-linux-arm64 -p addons-860203 addons disable headlamp --alsologtostderr -v=1
addons_test.go:777: (dbg) Done: out/minikube-linux-arm64 -p addons-860203 addons disable headlamp --alsologtostderr -v=1: (5.745331067s)
--- PASS: TestAddons/parallel/Headlamp (16.67s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.54s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-4vnl6" [30ace013-68dc-485d-89e2-77591694449b] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004586362s
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-860203
--- PASS: TestAddons/parallel/CloudSpanner (5.54s)

                                                
                                    
TestAddons/parallel/LocalPath (53.26s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:920: (dbg) Run:  kubectl --context addons-860203 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:926: (dbg) Run:  kubectl --context addons-860203 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:930: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860203 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860203 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860203 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860203 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860203 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-860203 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [92c8ab46-c735-4ec5-bbe0-ea5ba052f847] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [92c8ab46-c735-4ec5-bbe0-ea5ba052f847] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [92c8ab46-c735-4ec5-bbe0-ea5ba052f847] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003865234s
addons_test.go:938: (dbg) Run:  kubectl --context addons-860203 get pvc test-pvc -o=json
addons_test.go:947: (dbg) Run:  out/minikube-linux-arm64 -p addons-860203 ssh "cat /opt/local-path-provisioner/pvc-166635a0-0201-4c8a-9abf-4eb4b8c5429c_default_test-pvc/file1"
addons_test.go:959: (dbg) Run:  kubectl --context addons-860203 delete pod test-local-path
addons_test.go:963: (dbg) Run:  kubectl --context addons-860203 delete pvc test-pvc
addons_test.go:967: (dbg) Run:  out/minikube-linux-arm64 -p addons-860203 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:967: (dbg) Done: out/minikube-linux-arm64 -p addons-860203 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.165665663s)
--- PASS: TestAddons/parallel/LocalPath (53.26s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.69s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-wlbkx" [a7d9294a-f162-4ea1-8e95-8df04a4f0793] Running
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.006507361s
addons_test.go:1002: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-860203
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.69s)

TestAddons/parallel/Yakd (11.67s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-96gz9" [ddab9798-7102-4902-8a22-3bff28c19173] Running
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003720074s
addons_test.go:1014: (dbg) Run:  out/minikube-linux-arm64 -p addons-860203 addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-linux-arm64 -p addons-860203 addons disable yakd --alsologtostderr -v=1: (5.666744792s)
--- PASS: TestAddons/parallel/Yakd (11.67s)

TestAddons/StoppedEnableDisable (6s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-860203
addons_test.go:170: (dbg) Done: out/minikube-linux-arm64 stop -p addons-860203: (5.734302113s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-860203
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-860203
addons_test.go:183: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-860203
--- PASS: TestAddons/StoppedEnableDisable (6.00s)

TestCertOptions (46.61s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-073621 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
E0920 23:10:20.326351 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/skaffold-344965/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-073621 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (43.193933381s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-073621 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-073621 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-073621 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-073621" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-073621
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-073621: (2.632564428s)
--- PASS: TestCertOptions (46.61s)
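
The openssl call above dumps the API server certificate so the test can confirm the extra --apiserver-ips and --apiserver-names landed in its SANs. The same inspection in Go, assuming the certificate has first been copied out of the node to a local apiserver.crt (the path is illustrative):

// certsan.go - decode an API server cert and list its SANs.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	data, err := os.ReadFile("apiserver.crt") // assumed local copy of the node's cert
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("DNS SANs:", cert.DNSNames)    // expect localhost, www.google.com
	fmt.Println("IP SANs: ", cert.IPAddresses) // expect 127.0.0.1, 192.168.15.15
}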

TestCertExpiration (246.07s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-558379 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-558379 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (39.669498356s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-558379 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
E0920 23:14:24.326384 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/addons-860203/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-558379 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (24.055650599s)
helpers_test.go:175: Cleaning up "cert-expiration-558379" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-558379
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-558379: (2.345543124s)
--- PASS: TestCertExpiration (246.07s)

TestDockerFlags (37.03s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-arm64 start -p docker-flags-993837 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E0920 23:09:01.021459 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/functional-866813/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:51: (dbg) Done: out/minikube-linux-arm64 start -p docker-flags-993837 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (34.285628945s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-993837 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-993837 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-993837" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-flags-993837
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-flags-993837: (2.062598695s)
--- PASS: TestDockerFlags (37.03s)
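
The two systemctl probes above verify that the --docker-env values reached dockerd's unit Environment and that the --docker-opt values appear on its ExecStart line. A Go sketch of the Environment half, assuming it runs inside the minikube node (e.g. via minikube ssh):

// dockerenv.go - confirm --docker-env values reached the docker unit.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("systemctl", "show", "docker",
		"--property=Environment", "--no-pager").Output()
	if err != nil {
		fmt.Println("systemctl failed:", err)
		return
	}
	env := strings.TrimSpace(string(out)) // e.g. Environment=FOO=BAR BAZ=BAT
	for _, want := range []string{"FOO=BAR", "BAZ=BAT"} {
		fmt.Printf("%s present: %v\n", want, strings.Contains(env, want))
	}
}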

TestForceSystemdFlag (46.33s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-657191 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-657191 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (43.12390808s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-657191 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-657191" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-657191
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-657191: (2.463272952s)
--- PASS: TestForceSystemdFlag (46.33s)

TestForceSystemdEnv (39.9s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-205331 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E0920 23:09:24.326684 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/addons-860203/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-205331 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (36.716117418s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-205331 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-205331" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-205331
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-205331: (2.733355896s)
--- PASS: TestForceSystemdEnv (39.90s)

TestErrorSpam/setup (30.48s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-416500 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-416500 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-416500 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-416500 --driver=docker  --container-runtime=docker: (30.48416198s)
--- PASS: TestErrorSpam/setup (30.48s)

TestErrorSpam/start (0.73s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-416500 --log_dir /tmp/nospam-416500 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-416500 --log_dir /tmp/nospam-416500 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-416500 --log_dir /tmp/nospam-416500 start --dry-run
--- PASS: TestErrorSpam/start (0.73s)

TestErrorSpam/status (1.01s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-416500 --log_dir /tmp/nospam-416500 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-416500 --log_dir /tmp/nospam-416500 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-416500 --log_dir /tmp/nospam-416500 status
--- PASS: TestErrorSpam/status (1.01s)

TestErrorSpam/pause (1.47s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-416500 --log_dir /tmp/nospam-416500 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-416500 --log_dir /tmp/nospam-416500 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-416500 --log_dir /tmp/nospam-416500 pause
--- PASS: TestErrorSpam/pause (1.47s)

TestErrorSpam/unpause (1.59s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-416500 --log_dir /tmp/nospam-416500 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-416500 --log_dir /tmp/nospam-416500 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-416500 --log_dir /tmp/nospam-416500 unpause
--- PASS: TestErrorSpam/unpause (1.59s)

TestErrorSpam/stop (2.17s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-416500 --log_dir /tmp/nospam-416500 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-416500 --log_dir /tmp/nospam-416500 stop: (1.968498058s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-416500 --log_dir /tmp/nospam-416500 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-416500 --log_dir /tmp/nospam-416500 stop
--- PASS: TestErrorSpam/stop (2.17s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19672-1431110/.minikube/files/etc/test/nested/copy/1436493/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (74.76s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-866813 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-866813 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (1m14.754331009s)
--- PASS: TestFunctional/serial/StartWithProxy (74.76s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (27.29s)

=== RUN   TestFunctional/serial/SoftStart
I0920 22:27:36.512098 1436493 config.go:182] Loaded profile config "functional-866813": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-866813 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-866813 --alsologtostderr -v=8: (27.283437832s)
functional_test.go:663: soft start took 27.286817547s for "functional-866813" cluster.
I0920 22:28:03.795863 1436493 config.go:182] Loaded profile config "functional-866813": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (27.29s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-866813 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-866813 cache add registry.k8s.io/pause:3.1: (1.099950683s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-866813 cache add registry.k8s.io/pause:3.3: (1.129174392s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.17s)

TestFunctional/serial/CacheCmd/cache/add_local (0.96s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-866813 /tmp/TestFunctionalserialCacheCmdcacheadd_local195201728/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 cache add minikube-local-cache-test:functional-866813
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 cache delete minikube-local-cache-test:functional-866813
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-866813
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.96s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.57s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-866813 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (284.69866ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.57s)
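
The non-zero crictl exit above is the expected midpoint: the sequence deletes the cached image inside the node, confirms crictl no longer finds it, runs cache reload, then confirms the image is back. The same round trip as a Go sketch, reusing this run's binary path and profile name:

// cachereload.go - exercise minikube's cache reload round trip.
package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) error {
	return exec.Command("out/minikube-linux-arm64", args...).Run()
}

func main() {
	const p = "functional-866813"
	const img = "registry.k8s.io/pause:latest"
	_ = run("-p", p, "ssh", "sudo", "docker", "rmi", img)
	if err := run("-p", p, "ssh", "sudo", "crictl", "inspecti", img); err == nil {
		fmt.Println("unexpected: image still present after rmi")
		return
	}
	_ = run("-p", p, "cache", "reload")
	if err := run("-p", p, "ssh", "sudo", "crictl", "inspecti", img); err != nil {
		fmt.Println("reload did not restore image:", err)
		return
	}
	fmt.Println("cache reload restored", img)
}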

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 kubectl -- --context functional-866813 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-866813 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (42.03s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-866813 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-866813 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (42.02671936s)
functional_test.go:761: restart took 42.026826582s for "functional-866813" cluster.
I0920 22:28:52.487931 1436493 config.go:182] Loaded profile config "functional-866813": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (42.03s)

TestFunctional/serial/ComponentHealth (0.09s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-866813 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.09s)
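
The phase/status pairs above come from a single kubectl -o json call: the test requires every tier=control-plane pod to be Running with a Ready condition of True. A minimal Go sketch of that parse, with just enough JSON structure assumed to reach those fields:

// componenthealth.go - check control-plane pods are Running and Ready.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type podList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-866813",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system",
		"-o", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		ready := "False"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				ready = c.Status
			}
		}
		fmt.Printf("%s phase=%s ready=%s\n", p.Metadata.Name, p.Status.Phase, ready)
	}
}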

TestFunctional/serial/LogsCmd (1.18s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-866813 logs: (1.182264121s)
--- PASS: TestFunctional/serial/LogsCmd (1.18s)

TestFunctional/serial/LogsFileCmd (1.23s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 logs --file /tmp/TestFunctionalserialLogsFileCmd920882436/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-866813 logs --file /tmp/TestFunctionalserialLogsFileCmd920882436/001/logs.txt: (1.22897655s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.23s)

TestFunctional/serial/InvalidService (5.25s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-866813 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-866813
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-866813: exit status 115 (697.533523ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31420 |
	|-----------|-------------|-------------|---------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-866813 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-866813 delete -f testdata/invalidsvc.yaml: (1.282208199s)
--- PASS: TestFunctional/serial/InvalidService (5.25s)
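
SVC_UNREACHABLE above means the service selected no running pod, so its NodePort had nothing behind it. A Go sketch of how one might confirm that from the Endpoints object before probing the URL (kubectl and this run's context assumed):

// svcendpoints.go - a service with no ready pods has empty Endpoints.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-866813",
		"get", "endpoints", "invalid-svc",
		"-o", "jsonpath={.subsets[*].addresses[*].ip}").Output()
	if err != nil {
		log.Fatal(err)
	}
	ips := strings.Fields(string(out))
	if len(ips) == 0 {
		fmt.Println("no ready endpoints: the service is unreachable")
		return
	}
	fmt.Println("ready endpoints:", ips)
}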

TestFunctional/parallel/ConfigCmd (0.49s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-866813 config get cpus: exit status 14 (78.858032ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-866813 config get cpus: exit status 14 (77.679836ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.49s)
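
Both "exit status 14" results above are the expected path: config get on an unset key fails with that code rather than printing an empty value. A Go sketch of asserting it via exec.ExitError, using this run's binary and profile:

// configget.go - observe minikube's exit code for an unset config key.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	err := exec.Command("out/minikube-linux-arm64",
		"-p", "functional-866813", "config", "get", "cpus").Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		fmt.Println("exit status:", ee.ExitCode()) // 14 when the key is unset
		return
	}
	fmt.Println("key is set (or command missing):", err)
}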

TestFunctional/parallel/DashboardCmd (15.78s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-866813 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-866813 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1477534: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (15.78s)

TestFunctional/parallel/DryRun (0.58s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-866813 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-866813 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (248.70297ms)
-- stdout --
	* [functional-866813] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19672
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19672-1431110/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-1431110/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0920 22:29:33.281932 1477127 out.go:345] Setting OutFile to fd 1 ...
	I0920 22:29:33.282155 1477127 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 22:29:33.282163 1477127 out.go:358] Setting ErrFile to fd 2...
	I0920 22:29:33.282169 1477127 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 22:29:33.282394 1477127 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-1431110/.minikube/bin
	I0920 22:29:33.282732 1477127 out.go:352] Setting JSON to false
	I0920 22:29:33.283770 1477127 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":22325,"bootTime":1726849049,"procs":238,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0920 22:29:33.283829 1477127 start.go:139] virtualization:  
	I0920 22:29:33.286850 1477127 out.go:177] * [functional-866813] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0920 22:29:33.291763 1477127 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 22:29:33.291973 1477127 notify.go:220] Checking for updates...
	I0920 22:29:33.297212 1477127 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 22:29:33.299779 1477127 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-1431110/kubeconfig
	I0920 22:29:33.303059 1477127 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-1431110/.minikube
	I0920 22:29:33.305859 1477127 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0920 22:29:33.308367 1477127 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 22:29:33.311471 1477127 config.go:182] Loaded profile config "functional-866813": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 22:29:33.311978 1477127 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 22:29:33.353712 1477127 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0920 22:29:33.353868 1477127 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 22:29:33.442603 1477127 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-20 22:29:33.430539538 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 22:29:33.442720 1477127 docker.go:318] overlay module found
	I0920 22:29:33.447095 1477127 out.go:177] * Using the docker driver based on existing profile
	I0920 22:29:33.449591 1477127 start.go:297] selected driver: docker
	I0920 22:29:33.449613 1477127 start.go:901] validating driver "docker" against &{Name:functional-866813 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-866813 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 22:29:33.449727 1477127 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 22:29:33.453001 1477127 out.go:201] 
	W0920 22:29:33.456249 1477127 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0920 22:29:33.461744 1477127 out.go:201] 
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-866813 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.58s)
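
The RSRC_INSUFFICIENT_REQ_MEMORY exit above is minikube's pre-flight validation rejecting --memory 250MB against the quoted 1800MB usable minimum before any work happens. A toy Go sketch of that floor check; validateMemoryMB is a hypothetical helper, not minikube's own:

// memfloor.go - reject memory requests below a usable minimum.
package main

import "fmt"

const minUsableMB = 1800 // floor quoted in the error message above

func validateMemoryMB(req int) error {
	if req < minUsableMB {
		return fmt.Errorf("requested %dMB is less than the usable minimum of %dMB",
			req, minUsableMB)
	}
	return nil
}

func main() {
	fmt.Println(validateMemoryMB(250))  // fails, as in the dry run above
	fmt.Println(validateMemoryMB(4000)) // would pass
}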

TestFunctional/parallel/InternationalLanguage (0.26s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-866813 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-866813 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (262.862863ms)
-- stdout --
	* [functional-866813] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19672
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19672-1431110/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-1431110/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0920 22:29:33.040139 1477028 out.go:345] Setting OutFile to fd 1 ...
	I0920 22:29:33.040381 1477028 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 22:29:33.040412 1477028 out.go:358] Setting ErrFile to fd 2...
	I0920 22:29:33.040433 1477028 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 22:29:33.041419 1477028 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-1431110/.minikube/bin
	I0920 22:29:33.042094 1477028 out.go:352] Setting JSON to false
	I0920 22:29:33.043384 1477028 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":22324,"bootTime":1726849049,"procs":239,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0920 22:29:33.043503 1477028 start.go:139] virtualization:  
	I0920 22:29:33.047629 1477028 out.go:177] * [functional-866813] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	I0920 22:29:33.053128 1477028 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 22:29:33.053200 1477028 notify.go:220] Checking for updates...
	I0920 22:29:33.059480 1477028 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 22:29:33.062977 1477028 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-1431110/kubeconfig
	I0920 22:29:33.066116 1477028 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-1431110/.minikube
	I0920 22:29:33.068626 1477028 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0920 22:29:33.071232 1477028 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 22:29:33.074607 1477028 config.go:182] Loaded profile config "functional-866813": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 22:29:33.075139 1477028 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 22:29:33.125652 1477028 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0920 22:29:33.125782 1477028 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 22:29:33.201263 1477028 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-20 22:29:33.186350722 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 22:29:33.201372 1477028 docker.go:318] overlay module found
	I0920 22:29:33.203990 1477028 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0920 22:29:33.208236 1477028 start.go:297] selected driver: docker
	I0920 22:29:33.208256 1477028 start.go:901] validating driver "docker" against &{Name:functional-866813 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-866813 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 22:29:33.208355 1477028 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 22:29:33.211478 1477028 out.go:201] 
	W0920 22:29:33.214012 1477028 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0920 22:29:33.216453 1477028 out.go:201] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.26s)

TestFunctional/parallel/StatusCmd (1.31s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.31s)

TestFunctional/parallel/ServiceCmdConnect (10.71s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-866813 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-866813 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-wqrsq" [ac98f899-bb73-476a-96e3-80150a417221] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-wqrsq" [ac98f899-bb73-476a-96e3-80150a417221] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.003786416s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:32456
functional_test.go:1675: http://192.168.49.2:32456: success! body:
Hostname: hello-node-connect-65d86f57f4-wqrsq
Pod Information:
	-no pod information available-
Server values:
	server_version=nginx: 1.13.3 - lua: 10008
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32456
	user-agent=Go-http-client/1.1
Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.71s)
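This test is a full NodePort round trip: deploy, expose, resolve the URL, fetch. A hand-run equivalent under the same profile and image (curl stands in for the harness's HTTP client):

    kubectl --context functional-866813 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
    kubectl --context functional-866813 expose deployment hello-node-connect --type=NodePort --port=8080
    URL=$(minikube -p functional-866813 service hello-node-connect --url)   # e.g. http://192.168.49.2:32456 in this run
    curl -s "$URL"                                                          # echoserver prints hostname and request details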

TestFunctional/parallel/AddonsCmd (0.21s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.21s)
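Both invocations read the same addon state; the JSON form is the scripting-friendly variant:

    minikube -p functional-866813 addons list           # table of addons with enabled/disabled status
    minikube -p functional-866813 addons list -o json   # same data as JSON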

TestFunctional/parallel/PersistentVolumeClaim (28.9s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [12fc1d2a-f9ad-4638-8ebb-03a538c725c3] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003671109s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-866813 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-866813 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-866813 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-866813 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [6c5f3ad3-c08d-4248-a4a3-5d55f6655b72] Pending
helpers_test.go:344: "sp-pod" [6c5f3ad3-c08d-4248-a4a3-5d55f6655b72] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [6c5f3ad3-c08d-4248-a4a3-5d55f6655b72] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.003575577s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-866813 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-866813 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-866813 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [63181316-86aa-4597-9d01-b104e9570fdc] Pending
helpers_test.go:344: "sp-pod" [63181316-86aa-4597-9d01-b104e9570fdc] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [63181316-86aa-4597-9d01-b104e9570fdc] Running
E0920 22:29:24.329088 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/addons-860203/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:29:24.335468 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/addons-860203/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:29:24.346817 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/addons-860203/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:29:24.368251 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/addons-860203/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:29:24.409624 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/addons-860203/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:29:24.491102 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/addons-860203/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:29:24.652678 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/addons-860203/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:29:24.974430 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/addons-860203/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:29:25.615876 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/addons-860203/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:29:26.897496 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/addons-860203/client.crt: no such file or directory" logger="UnhandledError"
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.003644291s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-866813 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (28.90s)
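The sequence above is the standard persistence check: a file written through the claim must survive deletion and re-creation of the pod. Reproduced by hand with the same manifests (pvc.yaml defines claim "myclaim"; pod.yaml mounts it at /tmp/mount in pod "sp-pod"):

    kubectl --context functional-866813 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-866813 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-866813 exec sp-pod -- touch /tmp/mount/foo            # write through the mount
    kubectl --context functional-866813 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-866813 apply -f testdata/storage-provisioner/pod.yaml # fresh pod, same claim
    kubectl --context functional-866813 exec sp-pod -- ls /tmp/mount                   # "foo" should still be listed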

TestFunctional/parallel/SSHCmd (0.78s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.78s)

TestFunctional/parallel/CpCmd (1.98s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 ssh -n functional-866813 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 cp functional-866813:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4201153188/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 ssh -n functional-866813 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 ssh -n functional-866813 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.98s)
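cp copies in both directions, and each copy is verified with an ssh cat as above. A sketch (the host-side destination path is illustrative):

    minikube -p functional-866813 cp testdata/cp-test.txt /home/docker/cp-test.txt             # host -> node
    minikube -p functional-866813 ssh -n functional-866813 "sudo cat /home/docker/cp-test.txt"
    minikube -p functional-866813 cp functional-866813:/home/docker/cp-test.txt ./cp-test.txt  # node -> host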

TestFunctional/parallel/FileSync (0.34s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1436493/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 ssh "sudo cat /etc/test/nested/copy/1436493/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.34s)

TestFunctional/parallel/CertSync (2.18s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1436493.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 ssh "sudo cat /etc/ssl/certs/1436493.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1436493.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 ssh "sudo cat /usr/share/ca-certificates/1436493.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/14364932.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 ssh "sudo cat /etc/ssl/certs/14364932.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/14364932.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 ssh "sudo cat /usr/share/ca-certificates/14364932.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.18s)
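Each certificate is checked under its literal file name and under its OpenSSL subject-hash alias (51391683.0 and 3ec20f2e.0 here); certificates dropped into the host's ~/.minikube/certs directory are expected to be synced to these guest paths. Spot-checking by hand:

    minikube -p functional-866813 ssh "sudo cat /etc/ssl/certs/1436493.pem"
    minikube -p functional-866813 ssh "sudo cat /etc/ssl/certs/51391683.0"   # hash-named alias of the same cert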

TestFunctional/parallel/NodeLabels (0.11s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-866813 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.11s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.28s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 ssh "sudo systemctl is-active crio"
E0920 22:29:44.823453 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/addons-860203/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-866813 ssh "sudo systemctl is-active crio": exit status 1 (279.321411ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.28s)
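The non-zero exit above is the expected result, not a failure: systemctl is-active exits with status 3 for an inactive unit, and minikube ssh propagates that code. On this Docker-runtime profile:

    minikube -p functional-866813 ssh "sudo systemctl is-active crio"   # prints "inactive", exits 3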

TestFunctional/parallel/License (0.22s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.22s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.62s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-866813 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-866813 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-866813 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-866813 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1474537: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.62s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-866813 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.33s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-866813 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [e37deb3a-9969-41b0-a4ed-306cf8f65436] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [e37deb3a-9969-41b0-a4ed-306cf8f65436] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003931076s
I0920 22:29:10.679431 1436493 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.33s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-866813 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.106.49.180 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
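The tunnel flow across this serial group, by hand: keep a tunnel process running (it may prompt for sudo to install routes), wait for the LoadBalancer ingress IP, then hit it directly. The IP below is the one assigned in this run:

    minikube -p functional-866813 tunnel &   # must stay running for the tunnel's lifetime
    kubectl --context functional-866813 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
    curl -sI http://10.106.49.180/           # nginx answers on the tunneled service IP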

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-866813 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.22s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-866813 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-866813 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-dtpdf" [3aa2ddc4-54c9-447d-9c54-80d355cc9394] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-dtpdf" [3aa2ddc4-54c9-447d-9c54-80d355cc9394] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.003277466s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.22s)

TestFunctional/parallel/ServiceCmd/List (0.49s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 service list
E0920 22:29:29.459677 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/addons-860203/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.49s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.54s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 service list -o json
functional_test.go:1494: Took "536.262407ms" to run "out/minikube-linux-arm64 -p functional-866813 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.54s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.52s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.52s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.47s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:30216
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.47s)

TestFunctional/parallel/ProfileCmd/profile_list (0.52s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "438.816709ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "82.484272ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.52s)

TestFunctional/parallel/ServiceCmd/Format (0.55s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.55s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.64s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "546.138344ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "97.294579ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.64s)

TestFunctional/parallel/ServiceCmd/URL (0.55s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:30216
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.55s)
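The three service lookups in this group differ only in output shape and all resolve the same NodePort:

    minikube -p functional-866813 service hello-node --url                    # http://192.168.49.2:30216 in this run
    minikube -p functional-866813 service hello-node --https --url            # same endpoint with an https scheme
    minikube -p functional-866813 service hello-node --url --format={{.IP}}   # node IP only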

TestFunctional/parallel/MountCmd/any-port (8.56s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-866813 /tmp/TestFunctionalparallelMountCmdany-port45532087/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726871371608185175" to /tmp/TestFunctionalparallelMountCmdany-port45532087/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726871371608185175" to /tmp/TestFunctionalparallelMountCmdany-port45532087/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726871371608185175" to /tmp/TestFunctionalparallelMountCmdany-port45532087/001/test-1726871371608185175
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-866813 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (444.839638ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0920 22:29:32.053961 1436493 retry.go:31] will retry after 535.208263ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 20 22:29 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 20 22:29 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 20 22:29 test-1726871371608185175
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 ssh cat /mount-9p/test-1726871371608185175
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-866813 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [e65c72db-46b1-43b0-9b25-ee129629abfb] Pending
E0920 22:29:34.581346 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/addons-860203/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox-mount" [e65c72db-46b1-43b0-9b25-ee129629abfb] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [e65c72db-46b1-43b0-9b25-ee129629abfb] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [e65c72db-46b1-43b0-9b25-ee129629abfb] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003194059s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-866813 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-866813 /tmp/TestFunctionalparallelMountCmdany-port45532087/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.56s)
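The 9p mount flow, reproduced by hand (the host directory below is illustrative; without --port, as in this "any-port" variant, the mount server picks a free port):

    minikube mount -p functional-866813 /tmp/hostdir:/mount-9p &   # keep running for the mount's lifetime
    minikube -p functional-866813 ssh "findmnt -T /mount-9p"       # shows a 9p filesystem once the mount is live
    minikube -p functional-866813 ssh "ls -la /mount-9p"           # host files visible from the guest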

TestFunctional/parallel/MountCmd/specific-port (2.03s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-866813 /tmp/TestFunctionalparallelMountCmdspecific-port3331090296/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-866813 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (443.035001ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0920 22:29:40.614229 1436493 retry.go:31] will retry after 371.422102ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-866813 /tmp/TestFunctionalparallelMountCmdspecific-port3331090296/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-866813 ssh "sudo umount -f /mount-9p": exit status 1 (327.091145ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-866813 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-866813 /tmp/TestFunctionalparallelMountCmdspecific-port3331090296/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.03s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.38s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-866813 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3680892561/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-866813 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3680892561/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-866813 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3680892561/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-866813 ssh "findmnt -T" /mount1: exit status 1 (920.659569ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0920 22:29:43.126287 1436493 retry.go:31] will retry after 336.904224ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-866813 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-866813 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3680892561/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-866813 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3680892561/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-866813 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3680892561/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.38s)
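Rather than stopping each mount daemon individually, the cleanup path exercised here tears them all down at once:

    minikube mount -p functional-866813 --kill=true   # terminate every running mount for this profile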

TestFunctional/parallel/Version/short (0.08s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

TestFunctional/parallel/Version/components (1.12s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-866813 version -o=json --components: (1.11875523s)
--- PASS: TestFunctional/parallel/Version/components (1.12s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-866813 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-866813
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-866813
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-866813 image ls --format short --alsologtostderr:
I0920 22:29:53.470318 1480605 out.go:345] Setting OutFile to fd 1 ...
I0920 22:29:53.470428 1480605 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 22:29:53.470438 1480605 out.go:358] Setting ErrFile to fd 2...
I0920 22:29:53.470443 1480605 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 22:29:53.470690 1480605 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-1431110/.minikube/bin
I0920 22:29:53.471360 1480605 config.go:182] Loaded profile config "functional-866813": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 22:29:53.471481 1480605 config.go:182] Loaded profile config "functional-866813": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 22:29:53.471963 1480605 cli_runner.go:164] Run: docker container inspect functional-866813 --format={{.State.Status}}
I0920 22:29:53.489614 1480605 ssh_runner.go:195] Run: systemctl --version
I0920 22:29:53.489676 1480605 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-866813
I0920 22:29:53.513911 1480605 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33540 SSHKeyPath:/home/jenkins/minikube-integration/19672-1431110/.minikube/machines/functional-866813/id_rsa Username:docker}
I0920 22:29:53.616706 1480605 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)
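image ls accepts the four formats exercised by this group; short lists repo:tag references only, while the table, json, and yaml forms add image IDs and sizes:

    minikube -p functional-866813 image ls --format short
    minikube -p functional-866813 image ls --format table
    minikube -p functional-866813 image ls --format json
    minikube -p functional-866813 image ls --format yaml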

TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-866813 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| docker.io/library/nginx                     | latest            | 195245f0c7927 | 193MB  |
| registry.k8s.io/etcd                        | 3.5.15-0          | 27e3830e14027 | 139MB  |
| registry.k8s.io/kube-scheduler              | v1.31.1           | 7f8aa378bb47d | 66MB   |
| registry.k8s.io/kube-proxy                  | v1.31.1           | 24a140c548c07 | 94.7MB |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| docker.io/library/minikube-local-cache-test | functional-866813 | 15d09f45873fb | 30B    |
| registry.k8s.io/kube-apiserver              | v1.31.1           | d3f53a98c0a9d | 91.6MB |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| docker.io/kicbase/echo-server               | functional-866813 | ce2d2cda2d858 | 4.78MB |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 279f381cb3736 | 85.9MB |
| docker.io/library/nginx                     | alpine            | b887aca7aed61 | 47MB   |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/coredns/coredns             | v1.11.3           | 2f6c962e7b831 | 60.2MB |
| registry.k8s.io/pause                       | 3.10              | afb61768ce381 | 514kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-866813 image ls --format table --alsologtostderr:
I0920 22:29:54.230582 1480827 out.go:345] Setting OutFile to fd 1 ...
I0920 22:29:54.230815 1480827 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 22:29:54.230832 1480827 out.go:358] Setting ErrFile to fd 2...
I0920 22:29:54.230841 1480827 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 22:29:54.231245 1480827 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-1431110/.minikube/bin
I0920 22:29:54.232201 1480827 config.go:182] Loaded profile config "functional-866813": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 22:29:54.232436 1480827 config.go:182] Loaded profile config "functional-866813": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 22:29:54.233159 1480827 cli_runner.go:164] Run: docker container inspect functional-866813 --format={{.State.Status}}
I0920 22:29:54.257951 1480827 ssh_runner.go:195] Run: systemctl --version
I0920 22:29:54.258025 1480827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-866813
I0920 22:29:54.283929 1480827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33540 SSHKeyPath:/home/jenkins/minikube-integration/19672-1431110/.minikube/machines/functional-866813/id_rsa Username:docker}
I0920 22:29:54.376895 1480827 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-866813 image ls --format json --alsologtostderr:
[{"id":"24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"94700000"},{"id":"195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"193000000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"b887aca7aed6
134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"47000000"},{"id":"2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"60200000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"91600000"},{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139000000"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"514000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker
.io/kicbase/echo-server:functional-866813"],"size":"4780000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"15d09f45873fb7d4fe0b80276eb4900f07d4ca6f48d132ec6f5ab3d5872e8172","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-866813"],"size":"30"},{"id":"7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"66000000"},{"id":"279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"85900000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"s
ize":"3550000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-866813 image ls --format json --alsologtostderr:
I0920 22:29:53.973428 1480741 out.go:345] Setting OutFile to fd 1 ...
I0920 22:29:53.973592 1480741 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 22:29:53.973598 1480741 out.go:358] Setting ErrFile to fd 2...
I0920 22:29:53.973641 1480741 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 22:29:53.973962 1480741 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-1431110/.minikube/bin
I0920 22:29:53.974657 1480741 config.go:182] Loaded profile config "functional-866813": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 22:29:53.974779 1480741 config.go:182] Loaded profile config "functional-866813": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 22:29:53.975282 1480741 cli_runner.go:164] Run: docker container inspect functional-866813 --format={{.State.Status}}
I0920 22:29:54.012332 1480741 ssh_runner.go:195] Run: systemctl --version
I0920 22:29:54.012402 1480741 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-866813
I0920 22:29:54.031155 1480741 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33540 SSHKeyPath:/home/jenkins/minikube-integration/19672-1431110/.minikube/machines/functional-866813/id_rsa Username:docker}
I0920 22:29:54.132496 1480741 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-866813 image ls --format yaml --alsologtostderr:
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "66000000"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139000000"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "514000"
- id: 15d09f45873fb7d4fe0b80276eb4900f07d4ca6f48d132ec6f5ab3d5872e8172
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-866813
size: "30"
- id: 279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "85900000"
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "60200000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "91600000"
- id: 195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-866813
size: "4780000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "94700000"
- id: b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "47000000"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-866813 image ls --format yaml --alsologtostderr:
I0920 22:29:53.717260 1480677 out.go:345] Setting OutFile to fd 1 ...
I0920 22:29:53.717474 1480677 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 22:29:53.717542 1480677 out.go:358] Setting ErrFile to fd 2...
I0920 22:29:53.717564 1480677 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 22:29:53.717947 1480677 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-1431110/.minikube/bin
I0920 22:29:53.719008 1480677 config.go:182] Loaded profile config "functional-866813": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 22:29:53.719215 1480677 config.go:182] Loaded profile config "functional-866813": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 22:29:53.719852 1480677 cli_runner.go:164] Run: docker container inspect functional-866813 --format={{.State.Status}}
I0920 22:29:53.737141 1480677 ssh_runner.go:195] Run: systemctl --version
I0920 22:29:53.737193 1480677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-866813
I0920 22:29:53.757854 1480677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33540 SSHKeyPath:/home/jenkins/minikube-integration/19672-1431110/.minikube/machines/functional-866813/id_rsa Username:docker}
I0920 22:29:53.856723 1480677 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-866813 ssh pgrep buildkitd: exit status 1 (366.449286ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 image build -t localhost/my-image:functional-866813 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-866813 image build -t localhost/my-image:functional-866813 testdata/build --alsologtostderr: (2.906421825s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-866813 image build -t localhost/my-image:functional-866813 testdata/build --alsologtostderr:
I0920 22:29:54.278117 1480833 out.go:345] Setting OutFile to fd 1 ...
I0920 22:29:54.279464 1480833 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 22:29:54.279512 1480833 out.go:358] Setting ErrFile to fd 2...
I0920 22:29:54.279533 1480833 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 22:29:54.280296 1480833 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-1431110/.minikube/bin
I0920 22:29:54.281083 1480833 config.go:182] Loaded profile config "functional-866813": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 22:29:54.282441 1480833 config.go:182] Loaded profile config "functional-866813": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 22:29:54.283035 1480833 cli_runner.go:164] Run: docker container inspect functional-866813 --format={{.State.Status}}
I0920 22:29:54.309513 1480833 ssh_runner.go:195] Run: systemctl --version
I0920 22:29:54.309584 1480833 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-866813
I0920 22:29:54.340380 1480833 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33540 SSHKeyPath:/home/jenkins/minikube-integration/19672-1431110/.minikube/machines/functional-866813/id_rsa Username:docker}
I0920 22:29:54.433474 1480833 build_images.go:161] Building image from path: /tmp/build.4179746218.tar
I0920 22:29:54.433552 1480833 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0920 22:29:54.445469 1480833 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4179746218.tar
I0920 22:29:54.450693 1480833 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4179746218.tar: stat -c "%s %y" /var/lib/minikube/build/build.4179746218.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.4179746218.tar': No such file or directory
I0920 22:29:54.450739 1480833 ssh_runner.go:362] scp /tmp/build.4179746218.tar --> /var/lib/minikube/build/build.4179746218.tar (3072 bytes)
I0920 22:29:54.485339 1480833 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4179746218
I0920 22:29:54.497833 1480833 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4179746218 -xf /var/lib/minikube/build/build.4179746218.tar
I0920 22:29:54.507920 1480833 docker.go:360] Building image: /var/lib/minikube/build/build.4179746218
I0920 22:29:54.508015 1480833 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-866813 /var/lib/minikube/build/build.4179746218
#0 building with "default" instance using docker driver
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.3s
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.5s
#6 [2/3] RUN true
#6 DONE 0.3s
#7 [3/3] ADD content.txt /
#7 DONE 0.1s
#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:094e759557ec2a0b2ce5f3cc51b46cd655b6746feb555eb1128ac4c6253fedff done
#8 naming to localhost/my-image:functional-866813 done
#8 DONE 0.1s
I0920 22:29:57.093582 1480833 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-866813 /var/lib/minikube/build/build.4179746218: (2.585514015s)
I0920 22:29:57.093672 1480833 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4179746218
I0920 22:29:57.102711 1480833 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4179746218.tar
I0920 22:29:57.111514 1480833 build_images.go:217] Built localhost/my-image:functional-866813 from /tmp/build.4179746218.tar
I0920 22:29:57.111546 1480833 build_images.go:133] succeeded building to: functional-866813
I0920 22:29:57.111552 1480833 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.48s)
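
For reference, BuildKit steps #1-#8 above imply a Dockerfile of roughly the following shape. This is a reconstruction from the trace, not the literal contents of testdata/build, and the payload of content.txt is not shown in the log:

    FROM gcr.io/k8s-minikube/busybox
    RUN true
    ADD content.txt /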

TestFunctional/parallel/ImageCommands/Setup (0.71s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-866813
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.71s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.97s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 image load --daemon kicbase/echo-server:functional-866813 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.97s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.79s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 image load --daemon kicbase/echo-server:functional-866813 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.79s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-866813
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 image load --daemon kicbase/echo-server:functional-866813 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.03s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 image save kicbase/echo-server:functional-866813 /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.32s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.43s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 image rm kicbase/echo-server:functional-866813 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.43s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.72s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
2024/09/20 22:29:49 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.72s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-866813
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 image save --daemon kicbase/echo-server:functional-866813 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-866813
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.51s)

TestFunctional/parallel/DockerEnv/bash (1.31s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-866813 docker-env) && out/minikube-linux-arm64 status -p functional-866813"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-866813 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.31s)
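
The eval $(... docker-env) step points the host docker CLI at the daemon inside the functional-866813 node, which is why the subsequent docker images call lists the node's images. The command emits shell exports roughly like the sketch below; the endpoint, port, and cert path are illustrative, as the real values come from the profile:

    export DOCKER_TLS_VERIFY="1"
    export DOCKER_HOST="tcp://192.168.49.2:2376"
    export DOCKER_CERT_PATH="/home/jenkins/.minikube/certs"
    export MINIKUBE_ACTIVE_DOCKERD="functional-866813"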

TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.18s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-866813 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.18s)

TestFunctional/delete_echo-server_images (0.05s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-866813
--- PASS: TestFunctional/delete_echo-server_images (0.05s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-866813
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-866813
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (123.47s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-037582 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0920 22:30:05.305974 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/addons-860203/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:30:46.267938 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/addons-860203/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-037582 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (2m2.645632905s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-037582 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (123.47s)

TestMultiControlPlane/serial/DeployApp (8.26s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-037582 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-037582 -- rollout status deployment/busybox
E0920 22:32:08.190306 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/addons-860203/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-037582 -- rollout status deployment/busybox: (5.067832115s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-037582 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-037582 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-037582 -- exec busybox-7dff88458-5jjjk -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-037582 -- exec busybox-7dff88458-dpr76 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-037582 -- exec busybox-7dff88458-vxljs -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-037582 -- exec busybox-7dff88458-5jjjk -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-037582 -- exec busybox-7dff88458-dpr76 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-037582 -- exec busybox-7dff88458-vxljs -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-037582 -- exec busybox-7dff88458-5jjjk -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-037582 -- exec busybox-7dff88458-dpr76 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-037582 -- exec busybox-7dff88458-vxljs -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (8.26s)

TestMultiControlPlane/serial/PingHostFromPods (1.72s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-037582 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-037582 -- exec busybox-7dff88458-5jjjk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-037582 -- exec busybox-7dff88458-5jjjk -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-037582 -- exec busybox-7dff88458-dpr76 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-037582 -- exec busybox-7dff88458-dpr76 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-037582 -- exec busybox-7dff88458-vxljs -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-037582 -- exec busybox-7dff88458-vxljs -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.72s)
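
Inside each busybox pod, the pipeline above reduces to the sketch below: the fifth line of busybox nslookup output carries the resolved address of host.minikube.internal, and the extracted field is then pinged (192.168.49.1, the container gateway, in these runs):

    # illustrative reduction of the in-pod commands
    HOST_IP=$(nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3)
    ping -c 1 "$HOST_IP"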

TestMultiControlPlane/serial/AddWorkerNode (29.32s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-037582 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-037582 -v=7 --alsologtostderr: (28.225965202s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-037582 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-037582 status -v=7 --alsologtostderr: (1.09408333s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (29.32s)

TestMultiControlPlane/serial/NodeLabels (0.17s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-037582 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.17s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.08s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.079613668s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.08s)

TestMultiControlPlane/serial/CopyFile (19.27s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-037582 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-037582 cp testdata/cp-test.txt ha-037582:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-037582 ssh -n ha-037582 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-037582 cp ha-037582:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2845435697/001/cp-test_ha-037582.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-037582 ssh -n ha-037582 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-037582 cp ha-037582:/home/docker/cp-test.txt ha-037582-m02:/home/docker/cp-test_ha-037582_ha-037582-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-037582 ssh -n ha-037582 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-037582 ssh -n ha-037582-m02 "sudo cat /home/docker/cp-test_ha-037582_ha-037582-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-037582 cp ha-037582:/home/docker/cp-test.txt ha-037582-m03:/home/docker/cp-test_ha-037582_ha-037582-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-037582 ssh -n ha-037582 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-037582 ssh -n ha-037582-m03 "sudo cat /home/docker/cp-test_ha-037582_ha-037582-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-037582 cp ha-037582:/home/docker/cp-test.txt ha-037582-m04:/home/docker/cp-test_ha-037582_ha-037582-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-037582 ssh -n ha-037582 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-037582 ssh -n ha-037582-m04 "sudo cat /home/docker/cp-test_ha-037582_ha-037582-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-037582 cp testdata/cp-test.txt ha-037582-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-037582 ssh -n ha-037582-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-037582 cp ha-037582-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2845435697/001/cp-test_ha-037582-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-037582 ssh -n ha-037582-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-037582 cp ha-037582-m02:/home/docker/cp-test.txt ha-037582:/home/docker/cp-test_ha-037582-m02_ha-037582.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-037582 ssh -n ha-037582-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-037582 ssh -n ha-037582 "sudo cat /home/docker/cp-test_ha-037582-m02_ha-037582.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-037582 cp ha-037582-m02:/home/docker/cp-test.txt ha-037582-m03:/home/docker/cp-test_ha-037582-m02_ha-037582-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-037582 ssh -n ha-037582-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-037582 ssh -n ha-037582-m03 "sudo cat /home/docker/cp-test_ha-037582-m02_ha-037582-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-037582 cp ha-037582-m02:/home/docker/cp-test.txt ha-037582-m04:/home/docker/cp-test_ha-037582-m02_ha-037582-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-037582 ssh -n ha-037582-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-037582 ssh -n ha-037582-m04 "sudo cat /home/docker/cp-test_ha-037582-m02_ha-037582-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-037582 cp testdata/cp-test.txt ha-037582-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-037582 ssh -n ha-037582-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-037582 cp ha-037582-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2845435697/001/cp-test_ha-037582-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-037582 ssh -n ha-037582-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-037582 cp ha-037582-m03:/home/docker/cp-test.txt ha-037582:/home/docker/cp-test_ha-037582-m03_ha-037582.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-037582 ssh -n ha-037582-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-037582 ssh -n ha-037582 "sudo cat /home/docker/cp-test_ha-037582-m03_ha-037582.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-037582 cp ha-037582-m03:/home/docker/cp-test.txt ha-037582-m02:/home/docker/cp-test_ha-037582-m03_ha-037582-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-037582 ssh -n ha-037582-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-037582 ssh -n ha-037582-m02 "sudo cat /home/docker/cp-test_ha-037582-m03_ha-037582-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-037582 cp ha-037582-m03:/home/docker/cp-test.txt ha-037582-m04:/home/docker/cp-test_ha-037582-m03_ha-037582-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-037582 ssh -n ha-037582-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-037582 ssh -n ha-037582-m04 "sudo cat /home/docker/cp-test_ha-037582-m03_ha-037582-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-037582 cp testdata/cp-test.txt ha-037582-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-037582 ssh -n ha-037582-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-037582 cp ha-037582-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2845435697/001/cp-test_ha-037582-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-037582 ssh -n ha-037582-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-037582 cp ha-037582-m04:/home/docker/cp-test.txt ha-037582:/home/docker/cp-test_ha-037582-m04_ha-037582.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-037582 ssh -n ha-037582-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-037582 ssh -n ha-037582 "sudo cat /home/docker/cp-test_ha-037582-m04_ha-037582.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-037582 cp ha-037582-m04:/home/docker/cp-test.txt ha-037582-m02:/home/docker/cp-test_ha-037582-m04_ha-037582-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-037582 ssh -n ha-037582-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-037582 ssh -n ha-037582-m02 "sudo cat /home/docker/cp-test_ha-037582-m04_ha-037582-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-037582 cp ha-037582-m04:/home/docker/cp-test.txt ha-037582-m03:/home/docker/cp-test_ha-037582-m04_ha-037582-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-037582 ssh -n ha-037582-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-037582 ssh -n ha-037582-m03 "sudo cat /home/docker/cp-test_ha-037582-m04_ha-037582-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.27s)
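
The block above is an all-pairs copy matrix over the four nodes. Every iteration is the same three-step pattern; <SRC> and <DST> below are placeholders, not literal arguments:

    out/minikube-linux-arm64 -p ha-037582 cp <SRC>:/home/docker/cp-test.txt <DST>:/home/docker/cp-test_<SRC>_<DST>.txt
    out/minikube-linux-arm64 -p ha-037582 ssh -n <SRC> "sudo cat /home/docker/cp-test.txt"
    out/minikube-linux-arm64 -p ha-037582 ssh -n <DST> "sudo cat /home/docker/cp-test_<SRC>_<DST>.txt"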

TestMultiControlPlane/serial/StopSecondaryNode (11.92s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-037582 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-037582 node stop m02 -v=7 --alsologtostderr: (11.124292634s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-037582 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-037582 status -v=7 --alsologtostderr: exit status 7 (797.539491ms)
-- stdout --
	ha-037582
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-037582-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-037582-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-037582-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I0920 22:33:14.462127 1503242 out.go:345] Setting OutFile to fd 1 ...
	I0920 22:33:14.462318 1503242 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 22:33:14.462327 1503242 out.go:358] Setting ErrFile to fd 2...
	I0920 22:33:14.462334 1503242 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 22:33:14.462608 1503242 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-1431110/.minikube/bin
	I0920 22:33:14.462864 1503242 out.go:352] Setting JSON to false
	I0920 22:33:14.462914 1503242 mustload.go:65] Loading cluster: ha-037582
	I0920 22:33:14.462983 1503242 notify.go:220] Checking for updates...
	I0920 22:33:14.463414 1503242 config.go:182] Loaded profile config "ha-037582": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 22:33:14.463464 1503242 status.go:174] checking status of ha-037582 ...
	I0920 22:33:14.464230 1503242 cli_runner.go:164] Run: docker container inspect ha-037582 --format={{.State.Status}}
	I0920 22:33:14.489555 1503242 status.go:364] ha-037582 host status = "Running" (err=<nil>)
	I0920 22:33:14.489579 1503242 host.go:66] Checking if "ha-037582" exists ...
	I0920 22:33:14.489893 1503242 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-037582
	I0920 22:33:14.516966 1503242 host.go:66] Checking if "ha-037582" exists ...
	I0920 22:33:14.517343 1503242 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 22:33:14.517401 1503242 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-037582
	I0920 22:33:14.553675 1503242 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33545 SSHKeyPath:/home/jenkins/minikube-integration/19672-1431110/.minikube/machines/ha-037582/id_rsa Username:docker}
	I0920 22:33:14.657981 1503242 ssh_runner.go:195] Run: systemctl --version
	I0920 22:33:14.662651 1503242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 22:33:14.674416 1503242 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 22:33:14.732511 1503242 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:71 SystemTime:2024-09-20 22:33:14.720544954 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 22:33:14.733123 1503242 kubeconfig.go:125] found "ha-037582" server: "https://192.168.49.254:8443"
	I0920 22:33:14.733158 1503242 api_server.go:166] Checking apiserver status ...
	I0920 22:33:14.733214 1503242 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:33:14.751830 1503242 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2263/cgroup
	I0920 22:33:14.764662 1503242 api_server.go:182] apiserver freezer: "13:freezer:/docker/0cc0fc18f0f21435a23bb8a7ac864212ee61e6e03c93e7e5d1efde04866f2833/kubepods/burstable/podc16063d60ce0b8b192b4adf51f9a90b7/10c6ed982efcaeab77074b320862e4c181fa7672358e1d6fbec1a97239d2bb0f"
	I0920 22:33:14.764751 1503242 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/0cc0fc18f0f21435a23bb8a7ac864212ee61e6e03c93e7e5d1efde04866f2833/kubepods/burstable/podc16063d60ce0b8b192b4adf51f9a90b7/10c6ed982efcaeab77074b320862e4c181fa7672358e1d6fbec1a97239d2bb0f/freezer.state
	I0920 22:33:14.774268 1503242 api_server.go:204] freezer state: "THAWED"
	I0920 22:33:14.774347 1503242 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0920 22:33:14.782290 1503242 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0920 22:33:14.782321 1503242 status.go:456] ha-037582 apiserver status = Running (err=<nil>)
	I0920 22:33:14.782333 1503242 status.go:176] ha-037582 status: &{Name:ha-037582 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 22:33:14.782351 1503242 status.go:174] checking status of ha-037582-m02 ...
	I0920 22:33:14.782658 1503242 cli_runner.go:164] Run: docker container inspect ha-037582-m02 --format={{.State.Status}}
	I0920 22:33:14.799869 1503242 status.go:364] ha-037582-m02 host status = "Stopped" (err=<nil>)
	I0920 22:33:14.799890 1503242 status.go:377] host is not running, skipping remaining checks
	I0920 22:33:14.799897 1503242 status.go:176] ha-037582-m02 status: &{Name:ha-037582-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 22:33:14.799917 1503242 status.go:174] checking status of ha-037582-m03 ...
	I0920 22:33:14.800510 1503242 cli_runner.go:164] Run: docker container inspect ha-037582-m03 --format={{.State.Status}}
	I0920 22:33:14.817927 1503242 status.go:364] ha-037582-m03 host status = "Running" (err=<nil>)
	I0920 22:33:14.817948 1503242 host.go:66] Checking if "ha-037582-m03" exists ...
	I0920 22:33:14.818260 1503242 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-037582-m03
	I0920 22:33:14.837131 1503242 host.go:66] Checking if "ha-037582-m03" exists ...
	I0920 22:33:14.837460 1503242 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 22:33:14.837512 1503242 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-037582-m03
	I0920 22:33:14.855484 1503242 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33555 SSHKeyPath:/home/jenkins/minikube-integration/19672-1431110/.minikube/machines/ha-037582-m03/id_rsa Username:docker}
	I0920 22:33:14.953930 1503242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 22:33:14.968180 1503242 kubeconfig.go:125] found "ha-037582" server: "https://192.168.49.254:8443"
	I0920 22:33:14.968209 1503242 api_server.go:166] Checking apiserver status ...
	I0920 22:33:14.968265 1503242 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:33:14.981453 1503242 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2172/cgroup
	I0920 22:33:14.991829 1503242 api_server.go:182] apiserver freezer: "13:freezer:/docker/396c331d544462481b5b268b8a6ca18e90e46fd34a0d92ef7c7849a5c4e0f6cd/kubepods/burstable/pod35ed9b663c1597cca3ef2cf6836af36e/c89740b5af884025635ebb45952803d88bc39a13ca862d35303af1f0b489d5bb"
	I0920 22:33:14.991903 1503242 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/396c331d544462481b5b268b8a6ca18e90e46fd34a0d92ef7c7849a5c4e0f6cd/kubepods/burstable/pod35ed9b663c1597cca3ef2cf6836af36e/c89740b5af884025635ebb45952803d88bc39a13ca862d35303af1f0b489d5bb/freezer.state
	I0920 22:33:15.019834 1503242 api_server.go:204] freezer state: "THAWED"
	I0920 22:33:15.019881 1503242 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0920 22:33:15.028936 1503242 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0920 22:33:15.028970 1503242 status.go:456] ha-037582-m03 apiserver status = Running (err=<nil>)
	I0920 22:33:15.028994 1503242 status.go:176] ha-037582-m03 status: &{Name:ha-037582-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 22:33:15.029013 1503242 status.go:174] checking status of ha-037582-m04 ...
	I0920 22:33:15.029357 1503242 cli_runner.go:164] Run: docker container inspect ha-037582-m04 --format={{.State.Status}}
	I0920 22:33:15.049180 1503242 status.go:364] ha-037582-m04 host status = "Running" (err=<nil>)
	I0920 22:33:15.049207 1503242 host.go:66] Checking if "ha-037582-m04" exists ...
	I0920 22:33:15.049639 1503242 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-037582-m04
	I0920 22:33:15.071424 1503242 host.go:66] Checking if "ha-037582-m04" exists ...
	I0920 22:33:15.071937 1503242 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 22:33:15.072014 1503242 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-037582-m04
	I0920 22:33:15.098693 1503242 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33560 SSHKeyPath:/home/jenkins/minikube-integration/19672-1431110/.minikube/machines/ha-037582-m04/id_rsa Username:docker}
	I0920 22:33:15.197445 1503242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 22:33:15.209605 1503242 status.go:176] ha-037582-m04 status: &{Name:ha-037582-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.92s)
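
The stderr trace shows how status probes the apiserver on each running control plane. Condensed into shell, the check is roughly the sketch below; status.go issues the healthz request in-process, so the curl line is only an equivalent, and <PID>/<CGROUP> stand for the values the first two commands return:

    sudo pgrep -xnf kube-apiserver.*minikube.*               # newest apiserver PID
    sudo egrep '^[0-9]+:freezer:' /proc/<PID>/cgroup         # its freezer cgroup
    sudo cat /sys/fs/cgroup/freezer/<CGROUP>/freezer.state   # expect "THAWED"
    curl -k https://192.168.49.254:8443/healthz              # expect 200 "ok"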

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.8s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.80s)

TestMultiControlPlane/serial/RestartSecondaryNode (37.57s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-037582 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-037582 node start m02 -v=7 --alsologtostderr: (36.32356701s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-037582 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-037582 status -v=7 --alsologtostderr: (1.112031299s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (37.57s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.02s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.017301902s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.02s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (262.21s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-037582 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-037582 -v=7 --alsologtostderr
E0920 22:34:01.023925 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/functional-866813/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:34:01.030733 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/functional-866813/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:34:01.042056 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/functional-866813/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:34:01.063403 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/functional-866813/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:34:01.104752 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/functional-866813/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:34:01.186101 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/functional-866813/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:34:01.347545 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/functional-866813/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:34:01.669015 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/functional-866813/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:34:02.310625 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/functional-866813/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:34:03.592176 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/functional-866813/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:34:06.154874 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/functional-866813/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:34:11.277290 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/functional-866813/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:34:21.518591 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/functional-866813/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:34:24.326431 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/addons-860203/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-037582 -v=7 --alsologtostderr: (34.329731196s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-037582 --wait=true -v=7 --alsologtostderr
E0920 22:34:42.000514 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/functional-866813/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:34:52.032228 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/addons-860203/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:35:22.962467 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/functional-866813/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:36:44.884906 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/functional-866813/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-037582 --wait=true -v=7 --alsologtostderr: (3m47.732082449s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-037582
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (262.21s)

TestMultiControlPlane/serial/DeleteSecondaryNode (11.33s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-037582 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-037582 node delete m03 -v=7 --alsologtostderr: (10.38083221s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-037582 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.33s)
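
The go-template in the final step prints one line per node Ready condition; with m03 deleted, the expected output is True once for each of the three remaining nodes, e.g. (illustrative):

    ' True
     True
     True
    '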

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.74s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.74s)

TestMultiControlPlane/serial/StopCluster (32.97s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-037582 stop -v=7 --alsologtostderr
E0920 22:39:01.021701 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/functional-866813/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-037582 stop -v=7 --alsologtostderr: (32.856703334s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-037582 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-037582 status -v=7 --alsologtostderr: exit status 7 (112.90488ms)
-- stdout --
	ha-037582
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-037582-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-037582-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0920 22:39:01.789897 1531519 out.go:345] Setting OutFile to fd 1 ...
	I0920 22:39:01.790127 1531519 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 22:39:01.790139 1531519 out.go:358] Setting ErrFile to fd 2...
	I0920 22:39:01.790146 1531519 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 22:39:01.790447 1531519 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-1431110/.minikube/bin
	I0920 22:39:01.790665 1531519 out.go:352] Setting JSON to false
	I0920 22:39:01.790726 1531519 mustload.go:65] Loading cluster: ha-037582
	I0920 22:39:01.790804 1531519 notify.go:220] Checking for updates...
	I0920 22:39:01.791844 1531519 config.go:182] Loaded profile config "ha-037582": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 22:39:01.791882 1531519 status.go:174] checking status of ha-037582 ...
	I0920 22:39:01.792533 1531519 cli_runner.go:164] Run: docker container inspect ha-037582 --format={{.State.Status}}
	I0920 22:39:01.812219 1531519 status.go:364] ha-037582 host status = "Stopped" (err=<nil>)
	I0920 22:39:01.812243 1531519 status.go:377] host is not running, skipping remaining checks
	I0920 22:39:01.812250 1531519 status.go:176] ha-037582 status: &{Name:ha-037582 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 22:39:01.812283 1531519 status.go:174] checking status of ha-037582-m02 ...
	I0920 22:39:01.812643 1531519 cli_runner.go:164] Run: docker container inspect ha-037582-m02 --format={{.State.Status}}
	I0920 22:39:01.832572 1531519 status.go:364] ha-037582-m02 host status = "Stopped" (err=<nil>)
	I0920 22:39:01.832598 1531519 status.go:377] host is not running, skipping remaining checks
	I0920 22:39:01.832605 1531519 status.go:176] ha-037582-m02 status: &{Name:ha-037582-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 22:39:01.832625 1531519 status.go:174] checking status of ha-037582-m04 ...
	I0920 22:39:01.832934 1531519 cli_runner.go:164] Run: docker container inspect ha-037582-m04 --format={{.State.Status}}
	I0920 22:39:01.849842 1531519 status.go:364] ha-037582-m04 host status = "Stopped" (err=<nil>)
	I0920 22:39:01.849873 1531519 status.go:377] host is not running, skipping remaining checks
	I0920 22:39:01.849880 1531519 status.go:176] ha-037582-m04 status: &{Name:ha-037582-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (32.97s)
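
Note on the exit code above: minikube status encodes cluster state in its exit status, so the non-zero exit (status 7) for a fully stopped cluster is expected here and the step still passes. Below is a minimal Go sketch, not the suite's code, of how a caller can tell such a deliberate status code apart from a failure to launch the binary (binary path and profile name are copied from the log):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "-p", "ha-037582", "status")
	out, err := cmd.CombinedOutput()

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("all nodes running")
	case errors.As(err, &exitErr):
		// minikube reports state via the exit code; the log above shows
		// status 7 when every host is stopped.
		fmt.Printf("status exit code %d\n%s", exitErr.ExitCode(), out)
	default:
		fmt.Println("could not run minikube:", err)
	}
}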

TestMultiControlPlane/serial/RestartCluster (103.27s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-037582 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0920 22:39:24.326332 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/addons-860203/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:39:28.727582 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/functional-866813/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-037582 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (1m42.251227677s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-037582 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (103.27s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.8s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.80s)

TestMultiControlPlane/serial/AddSecondaryNode (50.32s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-037582 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-037582 --control-plane -v=7 --alsologtostderr: (49.307706542s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-037582 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-037582 status -v=7 --alsologtostderr: (1.016154825s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (50.32s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.03s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.034364735s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.03s)

TestImageBuild/serial/Setup (30.96s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -p image-514192 --driver=docker  --container-runtime=docker
image_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -p image-514192 --driver=docker  --container-runtime=docker: (30.963378575s)
--- PASS: TestImageBuild/serial/Setup (30.96s)

TestImageBuild/serial/NormalBuild (1.87s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-514192
image_test.go:78: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-514192: (1.873161806s)
--- PASS: TestImageBuild/serial/NormalBuild (1.87s)

TestImageBuild/serial/BuildWithBuildArg (1.02s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-514192
image_test.go:99: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-514192: (1.018317612s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.02s)

TestImageBuild/serial/BuildWithDockerIgnore (0.92s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-514192
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.92s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.85s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-514192
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.85s)
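
The builds above cover the minikube image build variants this suite exercises: a plain build, a build with --build-opt=build-arg=... plus --build-opt=no-cache, and a build that selects a Dockerfile with -f. A standalone Go sketch that drives the same invocations through os/exec (all flags are verbatim from the log; this is an illustration, not the suite's code):

package main

import (
	"log"
	"os/exec"
)

// build runs `minikube image build` against the profile used above,
// appending the per-variant flags.
func build(args ...string) {
	base := []string{"image", "build", "-p", "image-514192"}
	cmd := exec.Command("out/minikube-linux-arm64", append(base, args...)...)
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("build failed: %v\n%s", err, out)
	}
}

func main() {
	build("-t", "aaa:latest", "./testdata/image-build/test-normal")
	build("-t", "aaa:latest", "--build-opt=build-arg=ENV_A=test_env_str", "--build-opt=no-cache", "./testdata/image-build/test-arg")
	build("-t", "aaa:latest", "-f", "inner/Dockerfile", "./testdata/image-build/test-f")
}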

TestJSONOutput/start/Command (77.95s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-879468 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-879468 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (1m17.940544798s)
--- PASS: TestJSONOutput/start/Command (77.95s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.6s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-879468 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.60s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.52s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-879468 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.52s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (10.97s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-879468 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-879468 --output=json --user=testUser: (10.969310264s)
--- PASS: TestJSONOutput/stop/Command (10.97s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-223367 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-223367 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (72.990763ms)

-- stdout --
	{"specversion":"1.0","id":"c1c6fafe-b54f-4e69-8d9b-e781c06fc5b0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-223367] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"98997dbf-7dfd-40ad-aaeb-efb7db7b50b6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19672"}}
	{"specversion":"1.0","id":"bfc64258-def1-4802-a7b2-aa67d4b80693","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"88ef878f-07ea-41c3-b082-2d1a0af17dd0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19672-1431110/kubeconfig"}}
	{"specversion":"1.0","id":"dc479c3f-48cb-48ad-9f3b-9c61762fcedf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-1431110/.minikube"}}
	{"specversion":"1.0","id":"e61f7e7a-bd13-4b8d-866f-e150bdf5b40c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"c90c1f6e-44a1-473b-926a-5d16a2bca507","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"0184c791-bce3-41ad-bbbb-5ac015137e04","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-223367" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-223367
--- PASS: TestErrorJSONOutput (0.22s)
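
The stdout above is one CloudEvents-style JSON object per line, and the final io.k8s.sigs.minikube.error event carries the exit code (56) that the command then returns. A small Go sketch that decodes such a stream; the field names are taken from the output above, not from minikube's source, and every data value in this payload is a string (even "exitcode":"56"), hence the map[string]string:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// Pipe the output of e.g. `minikube start --output=json` into stdin.
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON lines
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error: %s (exitcode %s)\n", ev.Data["message"], ev.Data["exitcode"])
		}
	}
}

Feeding the stdout block above into this program would print the DRV_UNSUPPORTED_OS message from the last event.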

TestKicCustomNetwork/create_custom_network (35.13s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-997948 --network=
E0920 22:44:01.021333 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/functional-866813/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:44:24.326343 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/addons-860203/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-997948 --network=: (33.02913168s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-997948" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-997948
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-997948: (2.07706139s)
--- PASS: TestKicCustomNetwork/create_custom_network (35.13s)

TestKicCustomNetwork/use_default_bridge_network (34.37s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-310892 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-310892 --network=bridge: (32.365481973s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-310892" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-310892
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-310892: (1.980842555s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (34.37s)

TestKicExistingNetwork (36.13s)

=== RUN   TestKicExistingNetwork
I0920 22:45:05.314394 1436493 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0920 22:45:05.330892 1436493 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0920 22:45:05.331684 1436493 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0920 22:45:05.331725 1436493 cli_runner.go:164] Run: docker network inspect existing-network
W0920 22:45:05.348044 1436493 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0920 22:45:05.348113 1436493 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I0920 22:45:05.348130 1436493 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I0920 22:45:05.348241 1436493 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0920 22:45:05.366962 1436493 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d45a504a59dc IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:26:93:95:2f} reservation:<nil>}
I0920 22:45:05.367842 1436493 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001b93510}
I0920 22:45:05.367878 1436493 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0920 22:45:05.367933 1436493 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0920 22:45:05.443411 1436493 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-440980 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-440980 --network=existing-network: (33.958829283s)
helpers_test.go:175: Cleaning up "existing-network-440980" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-440980
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-440980: (2.003955297s)
I0920 22:45:41.422347 1436493 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (36.13s)
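
The I/W lines above show the flow under test: probe the named network with docker network inspect, pick a free private /24 (192.168.49.0/24 is taken, so 192.168.58.0/24 is chosen), then create it with docker network create. A rough Go sketch of that probe-then-create sequence; the flags are copied from the log, but the subnet is hard-coded here whereas minikube scans for a free one:

package main

import (
	"log"
	"os/exec"
)

func main() {
	name := "existing-network"
	// Probe first: inspect exits non-zero when the network does not exist,
	// exactly as in the W-lines above.
	if err := exec.Command("docker", "network", "inspect", name).Run(); err == nil {
		log.Printf("network %s already exists", name)
		return
	}
	create := exec.Command("docker", "network", "create",
		"--driver=bridge",
		"--subnet=192.168.58.0/24",
		"--gateway=192.168.58.1",
		"-o", "--ip-masq", "-o", "--icc",
		"-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io="+name,
		name)
	if out, err := create.CombinedOutput(); err != nil {
		log.Fatalf("create failed: %v\n%s", err, out)
	}
	log.Printf("created %s", name)
}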

TestKicCustomSubnet (34.11s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-383384 --subnet=192.168.60.0/24
E0920 22:45:47.396204 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/addons-860203/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-383384 --subnet=192.168.60.0/24: (31.97798275s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-383384 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-383384" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-383384
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-383384: (2.107764305s)
--- PASS: TestKicCustomSubnet (34.11s)

TestKicStaticIP (33.57s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-872355 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-872355 --static-ip=192.168.200.200: (31.376433333s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-872355 ip
helpers_test.go:175: Cleaning up "static-ip-872355" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-872355
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-872355: (2.043505945s)
--- PASS: TestKicStaticIP (33.57s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (67.65s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-254997 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-254997 --driver=docker  --container-runtime=docker: (30.340816141s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-257696 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-257696 --driver=docker  --container-runtime=docker: (31.683413461s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-254997
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-257696
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-257696" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-257696
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-257696: (2.106783783s)
helpers_test.go:175: Cleaning up "first-254997" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-254997
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-254997: (2.164896892s)
--- PASS: TestMinikubeProfile (67.65s)

TestMountStart/serial/StartWithMountFirst (7.94s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-487664 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-487664 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (6.943430442s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.94s)

TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-487664 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

TestMountStart/serial/StartWithMountSecond (7.47s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-489682 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-489682 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (6.470756656s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.47s)

TestMountStart/serial/VerifyMountSecond (0.26s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-489682 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

TestMountStart/serial/DeleteFirst (1.49s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-487664 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-487664 --alsologtostderr -v=5: (1.491502226s)
--- PASS: TestMountStart/serial/DeleteFirst (1.49s)

TestMountStart/serial/VerifyMountPostDelete (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-489682 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

TestMountStart/serial/Stop (1.2s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-489682
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-489682: (1.203673505s)
--- PASS: TestMountStart/serial/Stop (1.20s)

TestMountStart/serial/RestartStopped (8.53s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-489682
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-489682: (7.531279329s)
--- PASS: TestMountStart/serial/RestartStopped (8.53s)

TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-489682 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

TestMultiNode/serial/FreshStart2Nodes (85.12s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-994042 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0920 22:49:01.021610 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/functional-866813/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:49:24.325919 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/addons-860203/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-994042 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m24.544483078s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994042 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (85.12s)

TestMultiNode/serial/DeployApp2Nodes (44.51s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-994042 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-994042 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-994042 -- rollout status deployment/busybox: (4.048226382s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-994042 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0920 22:49:55.700710 1436493 retry.go:31] will retry after 1.041889955s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-994042 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0920 22:49:56.886548 1436493 retry.go:31] will retry after 1.286776149s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-994042 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0920 22:49:58.323136 1436493 retry.go:31] will retry after 1.285308635s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-994042 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0920 22:49:59.773760 1436493 retry.go:31] will retry after 4.060781479s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-994042 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0920 22:50:03.977672 1436493 retry.go:31] will retry after 4.77811251s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-994042 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0920 22:50:08.917728 1436493 retry.go:31] will retry after 8.878134126s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-994042 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0920 22:50:17.940556 1436493 retry.go:31] will retry after 16.097582808s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
E0920 22:50:24.089730 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/functional-866813/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-994042 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-994042 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-994042 -- exec busybox-7dff88458-4s8hm -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-994042 -- exec busybox-7dff88458-vppzl -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-994042 -- exec busybox-7dff88458-4s8hm -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-994042 -- exec busybox-7dff88458-vppzl -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-994042 -- exec busybox-7dff88458-4s8hm -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-994042 -- exec busybox-7dff88458-vppzl -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (44.51s)
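
The retry.go lines above show the suite's polling pattern: re-check the pod IP count with delays that grow roughly exponentially (about 1s up to 16s, with jitter) until both busybox pods report an IP. A generic Go sketch of that wait-and-retry loop, a simplified stand-in rather than minikube's retry package:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry polls fn until it succeeds or attempts run out, doubling the
// delay each round and adding jitter, like the intervals in the log.
func retry(attempts int, initial time.Duration, fn func() error) error {
	delay := initial
	for i := 0; i < attempts; i++ {
		if err := fn(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		time.Sleep(delay + jitter)
		delay *= 2
	}
	return errors.New("condition not met after retries")
}

func main() {
	ips := 1 // stand-in for the observed pod IP count
	err := retry(8, time.Second, func() error {
		if ips < 2 {
			return fmt.Errorf("expected 2 Pod IPs but got %d (may be temporary)", ips)
		}
		return nil
	})
	fmt.Println(err)
}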

TestMultiNode/serial/PingHostFrom2Pods (1.06s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-994042 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-994042 -- exec busybox-7dff88458-4s8hm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-994042 -- exec busybox-7dff88458-4s8hm -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-994042 -- exec busybox-7dff88458-vppzl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-994042 -- exec busybox-7dff88458-vppzl -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.06s)

TestMultiNode/serial/AddNode (18.68s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-994042 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-994042 -v 3 --alsologtostderr: (17.820383533s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994042 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (18.68s)

TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-994042 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)
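
The jsonpath template above dumps each node's label map in a single kubectl call. An equivalent check written in Go, decoding kubectl get nodes -o json; the struct below names only the fields this sketch needs, and the context name is copied from the log:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type nodeList struct {
	Items []struct {
		Metadata struct {
			Name   string            `json:"name"`
			Labels map[string]string `json:"labels"`
		} `json:"metadata"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "multinode-994042",
		"get", "nodes", "-o", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var nl nodeList
	if err := json.Unmarshal(out, &nl); err != nil {
		log.Fatal(err)
	}
	for _, n := range nl.Items {
		fmt.Println(n.Metadata.Name, n.Metadata.Labels)
	}
}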

TestMultiNode/serial/ProfileList (0.72s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.72s)

TestMultiNode/serial/CopyFile (10.24s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994042 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994042 cp testdata/cp-test.txt multinode-994042:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994042 ssh -n multinode-994042 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994042 cp multinode-994042:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3524687144/001/cp-test_multinode-994042.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994042 ssh -n multinode-994042 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994042 cp multinode-994042:/home/docker/cp-test.txt multinode-994042-m02:/home/docker/cp-test_multinode-994042_multinode-994042-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994042 ssh -n multinode-994042 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994042 ssh -n multinode-994042-m02 "sudo cat /home/docker/cp-test_multinode-994042_multinode-994042-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994042 cp multinode-994042:/home/docker/cp-test.txt multinode-994042-m03:/home/docker/cp-test_multinode-994042_multinode-994042-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994042 ssh -n multinode-994042 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994042 ssh -n multinode-994042-m03 "sudo cat /home/docker/cp-test_multinode-994042_multinode-994042-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994042 cp testdata/cp-test.txt multinode-994042-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994042 ssh -n multinode-994042-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994042 cp multinode-994042-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3524687144/001/cp-test_multinode-994042-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994042 ssh -n multinode-994042-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994042 cp multinode-994042-m02:/home/docker/cp-test.txt multinode-994042:/home/docker/cp-test_multinode-994042-m02_multinode-994042.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994042 ssh -n multinode-994042-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994042 ssh -n multinode-994042 "sudo cat /home/docker/cp-test_multinode-994042-m02_multinode-994042.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994042 cp multinode-994042-m02:/home/docker/cp-test.txt multinode-994042-m03:/home/docker/cp-test_multinode-994042-m02_multinode-994042-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994042 ssh -n multinode-994042-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994042 ssh -n multinode-994042-m03 "sudo cat /home/docker/cp-test_multinode-994042-m02_multinode-994042-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994042 cp testdata/cp-test.txt multinode-994042-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994042 ssh -n multinode-994042-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994042 cp multinode-994042-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3524687144/001/cp-test_multinode-994042-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994042 ssh -n multinode-994042-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994042 cp multinode-994042-m03:/home/docker/cp-test.txt multinode-994042:/home/docker/cp-test_multinode-994042-m03_multinode-994042.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994042 ssh -n multinode-994042-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994042 ssh -n multinode-994042 "sudo cat /home/docker/cp-test_multinode-994042-m03_multinode-994042.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994042 cp multinode-994042-m03:/home/docker/cp-test.txt multinode-994042-m02:/home/docker/cp-test_multinode-994042-m03_multinode-994042-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994042 ssh -n multinode-994042-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994042 ssh -n multinode-994042-m02 "sudo cat /home/docker/cp-test_multinode-994042-m03_multinode-994042-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.24s)

TestMultiNode/serial/StopNode (2.22s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994042 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-994042 node stop m03: (1.216346803s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994042 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-994042 status: exit status 7 (501.890447ms)

-- stdout --
	multinode-994042
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-994042-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-994042-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994042 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-994042 status --alsologtostderr: exit status 7 (504.919586ms)

-- stdout --
	multinode-994042
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-994042-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-994042-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0920 22:51:08.337995 1606218 out.go:345] Setting OutFile to fd 1 ...
	I0920 22:51:08.338142 1606218 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 22:51:08.338167 1606218 out.go:358] Setting ErrFile to fd 2...
	I0920 22:51:08.338175 1606218 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 22:51:08.338556 1606218 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-1431110/.minikube/bin
	I0920 22:51:08.338798 1606218 out.go:352] Setting JSON to false
	I0920 22:51:08.338848 1606218 mustload.go:65] Loading cluster: multinode-994042
	I0920 22:51:08.338931 1606218 notify.go:220] Checking for updates...
	I0920 22:51:08.339305 1606218 config.go:182] Loaded profile config "multinode-994042": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 22:51:08.339322 1606218 status.go:174] checking status of multinode-994042 ...
	I0920 22:51:08.340391 1606218 cli_runner.go:164] Run: docker container inspect multinode-994042 --format={{.State.Status}}
	I0920 22:51:08.359092 1606218 status.go:364] multinode-994042 host status = "Running" (err=<nil>)
	I0920 22:51:08.359119 1606218 host.go:66] Checking if "multinode-994042" exists ...
	I0920 22:51:08.359438 1606218 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-994042
	I0920 22:51:08.385221 1606218 host.go:66] Checking if "multinode-994042" exists ...
	I0920 22:51:08.385765 1606218 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 22:51:08.385842 1606218 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-994042
	I0920 22:51:08.404511 1606218 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33672 SSHKeyPath:/home/jenkins/minikube-integration/19672-1431110/.minikube/machines/multinode-994042/id_rsa Username:docker}
	I0920 22:51:08.497405 1606218 ssh_runner.go:195] Run: systemctl --version
	I0920 22:51:08.502288 1606218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 22:51:08.514261 1606218 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 22:51:08.567736 1606218 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:61 SystemTime:2024-09-20 22:51:08.557432148 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 22:51:08.568417 1606218 kubeconfig.go:125] found "multinode-994042" server: "https://192.168.67.2:8443"
	I0920 22:51:08.568454 1606218 api_server.go:166] Checking apiserver status ...
	I0920 22:51:08.568502 1606218 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:51:08.579753 1606218 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2295/cgroup
	I0920 22:51:08.589353 1606218 api_server.go:182] apiserver freezer: "13:freezer:/docker/9d93ba6fec62249cd6a84467dd6da0890ce58606f3e96ad16458850790bf0d8f/kubepods/burstable/pod28e20cc49e9b6e5c6c9bbf40c97af9c9/4cf517731cfa01ddb08c0d721bd1f9c32e35a860329428f50c64a4f5678d0aa5"
	I0920 22:51:08.589426 1606218 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/9d93ba6fec62249cd6a84467dd6da0890ce58606f3e96ad16458850790bf0d8f/kubepods/burstable/pod28e20cc49e9b6e5c6c9bbf40c97af9c9/4cf517731cfa01ddb08c0d721bd1f9c32e35a860329428f50c64a4f5678d0aa5/freezer.state
	I0920 22:51:08.598246 1606218 api_server.go:204] freezer state: "THAWED"
	I0920 22:51:08.598276 1606218 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0920 22:51:08.605976 1606218 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0920 22:51:08.606004 1606218 status.go:456] multinode-994042 apiserver status = Running (err=<nil>)
	I0920 22:51:08.606015 1606218 status.go:176] multinode-994042 status: &{Name:multinode-994042 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 22:51:08.606032 1606218 status.go:174] checking status of multinode-994042-m02 ...
	I0920 22:51:08.606361 1606218 cli_runner.go:164] Run: docker container inspect multinode-994042-m02 --format={{.State.Status}}
	I0920 22:51:08.626992 1606218 status.go:364] multinode-994042-m02 host status = "Running" (err=<nil>)
	I0920 22:51:08.627016 1606218 host.go:66] Checking if "multinode-994042-m02" exists ...
	I0920 22:51:08.627314 1606218 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-994042-m02
	I0920 22:51:08.644867 1606218 host.go:66] Checking if "multinode-994042-m02" exists ...
	I0920 22:51:08.645197 1606218 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 22:51:08.645254 1606218 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-994042-m02
	I0920 22:51:08.664260 1606218 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33677 SSHKeyPath:/home/jenkins/minikube-integration/19672-1431110/.minikube/machines/multinode-994042-m02/id_rsa Username:docker}
	I0920 22:51:08.757727 1606218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 22:51:08.771864 1606218 status.go:176] multinode-994042-m02 status: &{Name:multinode-994042-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0920 22:51:08.771901 1606218 status.go:174] checking status of multinode-994042-m03 ...
	I0920 22:51:08.772251 1606218 cli_runner.go:164] Run: docker container inspect multinode-994042-m03 --format={{.State.Status}}
	I0920 22:51:08.788269 1606218 status.go:364] multinode-994042-m03 host status = "Stopped" (err=<nil>)
	I0920 22:51:08.788292 1606218 status.go:377] host is not running, skipping remaining checks
	I0920 22:51:08.788299 1606218 status.go:176] multinode-994042-m03 status: &{Name:multinode-994042-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.22s)
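
The stderr log above shows the two-step check minikube performs before reporting "apiserver status = Running": it resolves the kube-apiserver pid, confirms the container's freezer cgroup reports THAWED (i.e. not paused), and only then probes the /healthz endpoint. A minimal Go sketch of that sequence, assuming the cgroup path and endpoint shapes from the log (the truncated path is illustrative, and error handling is trimmed):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "os"
        "strings"
    )

    // apiserverHealthy mirrors the check in the log: confirm the apiserver's
    // container is not frozen, then probe its /healthz endpoint.
    func apiserverHealthy(freezerStatePath, healthzURL string) (bool, error) {
        state, err := os.ReadFile(freezerStatePath)
        if err != nil {
            return false, err
        }
        if strings.TrimSpace(string(state)) != "THAWED" {
            return false, fmt.Errorf("apiserver container is frozen: %q", state)
        }
        // The test cluster serves a self-signed certificate, so skip verification.
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}
        resp, err := client.Get(healthzURL)
        if err != nil {
            return false, err
        }
        defer resp.Body.Close()
        return resp.StatusCode == http.StatusOK, nil
    }

    func main() {
        ok, err := apiserverHealthy(
            "/sys/fs/cgroup/freezer/docker/9d93.../freezer.state", // truncated; full path in the log
            "https://192.168.67.2:8443/healthz",
        )
        fmt.Println(ok, err)
    }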

TestMultiNode/serial/StartAfterStop (11.1s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994042 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-994042 node start m03 -v=7 --alsologtostderr: (10.308824939s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994042 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (11.10s)

TestMultiNode/serial/RestartKeepsNodes (97.01s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-994042
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-994042
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-994042: (22.694626468s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-994042 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-994042 --wait=true -v=8 --alsologtostderr: (1m14.191331665s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-994042
--- PASS: TestMultiNode/serial/RestartKeepsNodes (97.01s)

TestMultiNode/serial/DeleteNode (5.68s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994042 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-994042 node delete m03: (4.999736178s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994042 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.68s)
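
The final kubectl invocation above relies on a go-template that walks every node's conditions and prints the status of the Ready condition, one line per node. The snippet below evaluates that same template with Go's text/template against a trimmed stand-in for kubectl's JSON output, to show what the test is actually asserting on:

    package main

    import (
        "encoding/json"
        "os"
        "text/template"
    )

    // The template the test passes to kubectl, verbatim.
    const readyTmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

    // A trimmed-down stand-in for `kubectl get nodes -o json` output.
    const nodesJSON = `{"items":[{"status":{"conditions":[{"type":"Ready","status":"True"}]}},
                       {"status":{"conditions":[{"type":"Ready","status":"True"}]}}]}`

    func main() {
        var nodes any
        if err := json.Unmarshal([]byte(nodesJSON), &nodes); err != nil {
            panic(err)
        }
        // Prints one " True" line per Ready node, which is what the test checks.
        template.Must(template.New("ready").Parse(readyTmpl)).Execute(os.Stdout, nodes)
    }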

TestMultiNode/serial/StopMultiNode (21.57s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994042 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-994042 stop: (21.378257434s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994042 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-994042 status: exit status 7 (97.419673ms)

-- stdout --
	multinode-994042
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-994042-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994042 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-994042 status --alsologtostderr: exit status 7 (90.973235ms)

-- stdout --
	multinode-994042
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-994042-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0920 22:53:24.115226 1619778 out.go:345] Setting OutFile to fd 1 ...
	I0920 22:53:24.115377 1619778 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 22:53:24.115388 1619778 out.go:358] Setting ErrFile to fd 2...
	I0920 22:53:24.115393 1619778 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 22:53:24.115659 1619778 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-1431110/.minikube/bin
	I0920 22:53:24.115858 1619778 out.go:352] Setting JSON to false
	I0920 22:53:24.115896 1619778 mustload.go:65] Loading cluster: multinode-994042
	I0920 22:53:24.115994 1619778 notify.go:220] Checking for updates...
	I0920 22:53:24.116349 1619778 config.go:182] Loaded profile config "multinode-994042": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 22:53:24.116369 1619778 status.go:174] checking status of multinode-994042 ...
	I0920 22:53:24.117237 1619778 cli_runner.go:164] Run: docker container inspect multinode-994042 --format={{.State.Status}}
	I0920 22:53:24.133708 1619778 status.go:364] multinode-994042 host status = "Stopped" (err=<nil>)
	I0920 22:53:24.133732 1619778 status.go:377] host is not running, skipping remaining checks
	I0920 22:53:24.133739 1619778 status.go:176] multinode-994042 status: &{Name:multinode-994042 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 22:53:24.133784 1619778 status.go:174] checking status of multinode-994042-m02 ...
	I0920 22:53:24.134096 1619778 cli_runner.go:164] Run: docker container inspect multinode-994042-m02 --format={{.State.Status}}
	I0920 22:53:24.153721 1619778 status.go:364] multinode-994042-m02 host status = "Stopped" (err=<nil>)
	I0920 22:53:24.153741 1619778 status.go:377] host is not running, skipping remaining checks
	I0920 22:53:24.153749 1619778 status.go:176] multinode-994042-m02 status: &{Name:multinode-994042-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.57s)
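
Both status invocations above exit with status 7, which this suite deliberately treats as acceptable for a stopped cluster rather than as a failure. A sketch of how a caller can make that distinction with os/exec; the constant reflects the behavior observed in this run, not a value quoted from minikube's source:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    // In this run, "minikube status" exits 7 once the hosts are stopped, so a
    // wrapper should treat that code as "stopped", not as a hard failure.
    const exitHostStopped = 7

    func main() {
        out, err := exec.Command("out/minikube-linux-arm64", "-p", "multinode-994042", "status").Output()
        var ee *exec.ExitError
        switch {
        case err == nil:
            fmt.Printf("running:\n%s", out)
        case errors.As(err, &ee) && ee.ExitCode() == exitHostStopped:
            fmt.Printf("stopped (exit %d):\n%s", ee.ExitCode(), out) // stdout is still captured
        default:
            fmt.Println("status failed:", err)
        }
    }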

TestMultiNode/serial/RestartMultiNode (62.36s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-994042 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0920 22:54:01.021182 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/functional-866813/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:54:24.325922 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/addons-860203/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-994042 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m1.679719844s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994042 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (62.36s)

TestMultiNode/serial/ValidateNameConflict (34.65s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-994042
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-994042-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-994042-m02 --driver=docker  --container-runtime=docker: exit status 14 (92.228042ms)

-- stdout --
	* [multinode-994042-m02] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19672
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19672-1431110/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-1431110/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-994042-m02' is duplicated with machine name 'multinode-994042-m02' in profile 'multinode-994042'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-994042-m03 --driver=docker  --container-runtime=docker
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-994042-m03 --driver=docker  --container-runtime=docker: (31.812869594s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-994042
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-994042: exit status 80 (632.847333ms)

-- stdout --
	* Adding node m03 to cluster multinode-994042 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-994042-m03 already exists in multinode-994042-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-994042-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-994042-m03: (2.057914514s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (34.65s)
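
The two rejections above come from the same rule: a new profile name may collide neither with an existing profile nor with a machine name inside a multi-node profile (here, multinode-994042-m02). A hypothetical sketch of that validation; the function and map layout are illustrative, not minikube's actual types:

    package main

    import "fmt"

    // validateProfileName sketches the uniqueness rule the test exercises: a new
    // profile may not reuse an existing profile name or any machine name that
    // belongs to an existing multi-node profile.
    func validateProfileName(name string, profiles map[string][]string) error {
        for profile, machines := range profiles {
            if name == profile {
                return fmt.Errorf("profile name %q is duplicated with profile %q", name, profile)
            }
            for _, m := range machines {
                if name == m {
                    return fmt.Errorf("profile name %q is duplicated with machine name %q in profile %q", name, m, profile)
                }
            }
        }
        return nil
    }

    func main() {
        existing := map[string][]string{
            "multinode-994042": {"multinode-994042", "multinode-994042-m02"},
        }
        fmt.Println(validateProfileName("multinode-994042-m02", existing)) // rejected
        fmt.Println(validateProfileName("multinode-994042-m04", existing)) // nil: allowed
    }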

TestPreload (106.37s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-495397 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-495397 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (1m2.431630663s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-495397 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-495397 image pull gcr.io/k8s-minikube/busybox: (2.153169818s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-495397
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-495397: (10.825753481s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-495397 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-495397 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (28.542458215s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-495397 image list
helpers_test.go:175: Cleaning up "test-preload-495397" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-495397
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-495397: (2.191452101s)
--- PASS: TestPreload (106.37s)

TestScheduledStopUnix (104.7s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-702616 --memory=2048 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-702616 --memory=2048 --driver=docker  --container-runtime=docker: (31.563038492s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-702616 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-702616 -n scheduled-stop-702616
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-702616 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0920 22:57:23.592318 1436493 retry.go:31] will retry after 139.873µs: open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/scheduled-stop-702616/pid: no such file or directory
I0920 22:57:23.593478 1436493 retry.go:31] will retry after 168.048µs: open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/scheduled-stop-702616/pid: no such file or directory
I0920 22:57:23.594615 1436493 retry.go:31] will retry after 160.525µs: open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/scheduled-stop-702616/pid: no such file or directory
I0920 22:57:23.595743 1436493 retry.go:31] will retry after 186.958µs: open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/scheduled-stop-702616/pid: no such file or directory
I0920 22:57:23.596841 1436493 retry.go:31] will retry after 275.218µs: open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/scheduled-stop-702616/pid: no such file or directory
I0920 22:57:23.597968 1436493 retry.go:31] will retry after 514.668µs: open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/scheduled-stop-702616/pid: no such file or directory
I0920 22:57:23.599091 1436493 retry.go:31] will retry after 665.901µs: open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/scheduled-stop-702616/pid: no such file or directory
I0920 22:57:23.600204 1436493 retry.go:31] will retry after 1.524024ms: open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/scheduled-stop-702616/pid: no such file or directory
I0920 22:57:23.602410 1436493 retry.go:31] will retry after 3.612335ms: open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/scheduled-stop-702616/pid: no such file or directory
I0920 22:57:23.607744 1436493 retry.go:31] will retry after 2.060099ms: open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/scheduled-stop-702616/pid: no such file or directory
I0920 22:57:23.609889 1436493 retry.go:31] will retry after 5.935022ms: open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/scheduled-stop-702616/pid: no such file or directory
I0920 22:57:23.616123 1436493 retry.go:31] will retry after 12.873758ms: open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/scheduled-stop-702616/pid: no such file or directory
I0920 22:57:23.629358 1436493 retry.go:31] will retry after 16.078702ms: open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/scheduled-stop-702616/pid: no such file or directory
I0920 22:57:23.645541 1436493 retry.go:31] will retry after 16.264277ms: open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/scheduled-stop-702616/pid: no such file or directory
I0920 22:57:23.662826 1436493 retry.go:31] will retry after 32.789195ms: open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/scheduled-stop-702616/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-702616 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-702616 -n scheduled-stop-702616
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-702616
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-702616 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-702616
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-702616: exit status 7 (68.083669ms)

-- stdout --
	scheduled-stop-702616
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-702616 -n scheduled-stop-702616
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-702616 -n scheduled-stop-702616: exit status 7 (71.242923ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-702616" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-702616
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-702616: (1.639203893s)
--- PASS: TestScheduledStopUnix (104.70s)
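
The retry.go lines in this test poll for the scheduled-stop pid file with a delay that starts in the microsecond range and grows roughly geometrically (with jitter) on each miss. A simplified Go sketch of that pattern, assuming a plain doubling backoff and the pid-file path shape from the log:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForFile mirrors the retry pattern above: poll for the scheduled-stop
    // pid file, roughly doubling the delay after each miss.
    func waitForFile(path string, attempts int) ([]byte, error) {
        delay := 100 * time.Microsecond
        for i := 0; i < attempts; i++ {
            if b, err := os.ReadFile(path); err == nil {
                return b, nil
            }
            time.Sleep(delay)
            delay *= 2
        }
        return nil, fmt.Errorf("%s never appeared after %d attempts", path, attempts)
    }

    func main() {
        // Path shape taken from the log; adjust for a real profile directory.
        pid, err := waitForFile(".minikube/profiles/scheduled-stop-702616/pid", 15)
        fmt.Println(string(pid), err)
    }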

TestSkaffold (118.13s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe1108173008 version
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p skaffold-344965 --memory=2600 --driver=docker  --container-runtime=docker
E0920 22:59:01.021292 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/functional-866813/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p skaffold-344965 --memory=2600 --driver=docker  --container-runtime=docker: (30.917719098s)
skaffold_test.go:86: copying out/minikube-linux-arm64 to /home/jenkins/workspace/Docker_Linux_docker_arm64/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe1108173008 run --minikube-profile skaffold-344965 --kube-context skaffold-344965 --status-check=true --port-forward=false --interactive=false
E0920 22:59:24.326352 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/addons-860203/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe1108173008 run --minikube-profile skaffold-344965 --kube-context skaffold-344965 --status-check=true --port-forward=false --interactive=false: (1m11.829428105s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-57c9f4fb7c-qgcgm" [9bf4f506-b3f7-4b40-897e-4baa5c36eb46] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.003128311s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-545f488575-l7ctq" [702ccccc-c4ec-4fbd-a563-91ede04eb809] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.00370868s
helpers_test.go:175: Cleaning up "skaffold-344965" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p skaffold-344965
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p skaffold-344965: (3.010207077s)
--- PASS: TestSkaffold (118.13s)

TestInsufficientStorage (10.97s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-387532 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-387532 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (8.635565372s)

-- stdout --
	{"specversion":"1.0","id":"b59463f0-08bc-48d9-a8f3-9a1424c7dbe5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-387532] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"74a05209-3771-4bd6-89e8-e4ef2bcfde4f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19672"}}
	{"specversion":"1.0","id":"7991e9d7-a664-4aef-867c-6345ecca9699","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b5c66b40-eefe-44af-94c1-55c7a05a9cf4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19672-1431110/kubeconfig"}}
	{"specversion":"1.0","id":"e3828ee5-2142-4727-8013-910ee6f0c235","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-1431110/.minikube"}}
	{"specversion":"1.0","id":"fb2abc50-7b6c-4770-93ff-a2c8182e9d99","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"a66a026a-9c01-4d51-95c2-4b1e2e358a9c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"33a58714-94f7-4dc1-b1cc-796156bf6320","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"96da560e-0c64-4709-8a41-3dccc0c056ca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"0cee4adf-198f-4691-ba22-da28030b49b8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"7731a614-ea36-4d97-b506-dc889c1eca8b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"d40d856e-88bc-43ad-9141-d8dffe4f5fe7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-387532\" primary control-plane node in \"insufficient-storage-387532\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"c680a429-d451-402a-8b1a-8641ea02f427","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1726784731-19672 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"7635bf8a-8fb4-4ef5-969d-65612311c344","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"203a1c40-6a42-4420-b6fc-78562dc39371","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-387532 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-387532 --output=json --layout=cluster: exit status 7 (303.781807ms)

-- stdout --
	{"Name":"insufficient-storage-387532","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-387532","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0920 23:00:43.283158 1653876 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-387532" does not appear in /home/jenkins/minikube-integration/19672-1431110/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-387532 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-387532 --output=json --layout=cluster: exit status 7 (279.806697ms)

-- stdout --
	{"Name":"insufficient-storage-387532","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-387532","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0920 23:00:43.563305 1653939 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-387532" does not appear in /home/jenkins/minikube-integration/19672-1431110/kubeconfig
	E0920 23:00:43.573654 1653939 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/insufficient-storage-387532/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-387532" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-387532
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-387532: (1.747639415s)
--- PASS: TestInsufficientStorage (10.97s)
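
The capacity probe behind this test is the same shell pipeline that appears in the status logs earlier in this report: df -h /var | awk 'NR==2{print $5}', while the MINIKUBE_TEST_STORAGE_CAPACITY/MINIKUBE_TEST_AVAILABLE_STORAGE overrides visible in the JSON events make a healthy host look full so the start exits 26. A small Go sketch of parsing that probe's output:

    package main

    import (
        "fmt"
        "os/exec"
        "strconv"
        "strings"
    )

    // varUsedPercent runs the same probe as the log's ssh command and parses
    // its output, e.g. "100%" -> 100.
    func varUsedPercent() (int, error) {
        out, err := exec.Command("sh", "-c", `df -h /var | awk 'NR==2{print $5}'`).Output()
        if err != nil {
            return 0, err
        }
        return strconv.Atoi(strings.TrimSuffix(strings.TrimSpace(string(out)), "%"))
    }

    func main() {
        used, err := varUsedPercent()
        if err != nil {
            panic(err)
        }
        if used >= 100 {
            fmt.Println("would exit 26: RSRC_DOCKER_STORAGE") // the error name from the JSON event above
        }
    }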

TestRunningBinaryUpgrade (78.95s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.815102435 start -p running-upgrade-022111 --memory=2200 --vm-driver=docker  --container-runtime=docker
E0920 23:06:42.266447 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/skaffold-344965/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.815102435 start -p running-upgrade-022111 --memory=2200 --vm-driver=docker  --container-runtime=docker: (36.788143739s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-022111 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0920 23:07:04.091351 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/functional-866813/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-022111 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (39.021921692s)
helpers_test.go:175: Cleaning up "running-upgrade-022111" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-022111
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-022111: (2.552411849s)
--- PASS: TestRunningBinaryUpgrade (78.95s)

TestKubernetesUpgrade (378.65s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-943720 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0920 23:02:27.397521 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/addons-860203/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-943720 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (58.856097533s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-943720
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-943720: (10.90829427s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-943720 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-943720 status --format={{.Host}}: exit status 7 (107.186678ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-943720 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-943720 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m40.291145497s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-943720 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-943720 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-943720 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker: exit status 106 (102.885207ms)

-- stdout --
	* [kubernetes-upgrade-943720] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19672
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19672-1431110/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-1431110/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-943720
	    minikube start -p kubernetes-upgrade-943720 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9437202 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-943720 --kubernetes-version=v1.31.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-943720 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-943720 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (25.870164393s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-943720" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-943720
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-943720: (2.401903898s)
--- PASS: TestKubernetesUpgrade (378.65s)
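
Exit code 106 (K8S_DOWNGRADE_UNSUPPORTED) above is produced by a guard that compares the requested Kubernetes version against the cluster's current one and refuses to move backwards. A sketch of such a guard using golang.org/x/mod/semver; it mirrors the behavior observed here, not necessarily minikube's exact implementation:

    package main

    import (
        "fmt"

        "golang.org/x/mod/semver"
    )

    // checkVersionChange allows an upgrade (or re-start at the same version)
    // and refuses a downgrade of an existing cluster.
    func checkVersionChange(current, requested string) error {
        if semver.Compare(requested, current) < 0 {
            return fmt.Errorf("unable to safely downgrade existing Kubernetes %s cluster to %s", current, requested)
        }
        return nil
    }

    func main() {
        fmt.Println(checkVersionChange("v1.31.1", "v1.20.0")) // refused, as in the log
        fmt.Println(checkVersionChange("v1.20.0", "v1.31.1")) // nil: allowed
    }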

TestMissingContainerUpgrade (162.19s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.2782357650 start -p missing-upgrade-824346 --memory=2200 --driver=docker  --container-runtime=docker
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.2782357650 start -p missing-upgrade-824346 --memory=2200 --driver=docker  --container-runtime=docker: (1m37.545424022s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-824346
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-824346: (10.43754263s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-824346
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-824346 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0920 23:04:01.021629 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/functional-866813/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:04:24.326361 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/addons-860203/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-824346 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (50.884358416s)
helpers_test.go:175: Cleaning up "missing-upgrade-824346" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-824346
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-824346: (2.378778343s)
--- PASS: TestMissingContainerUpgrade (162.19s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-454295 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-454295 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (122.354294ms)

-- stdout --
	* [NoKubernetes-454295] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19672
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19672-1431110/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-1431110/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)
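
Exit code 14 (MK_USAGE) above enforces that --kubernetes-version and --no-kubernetes are mutually exclusive. A minimal sketch of that kind of flag validation with Go's flag package; the exit code is taken from this log:

    package main

    import (
        "flag"
        "fmt"
        "os"
    )

    func main() {
        noK8s := flag.Bool("no-kubernetes", false, "start without Kubernetes")
        k8sVersion := flag.String("kubernetes-version", "", "Kubernetes version to run")
        flag.Parse()

        // The two options contradict each other, so reject the combination
        // up front with the usage exit code seen above.
        if *noK8s && *k8sVersion != "" {
            fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
            os.Exit(14)
        }
    }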

TestNoKubernetes/serial/StartWithK8s (43.6s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-454295 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-454295 --driver=docker  --container-runtime=docker: (43.187871628s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-454295 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (43.60s)

TestNoKubernetes/serial/StartWithStopK8s (18.55s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-454295 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-454295 --no-kubernetes --driver=docker  --container-runtime=docker: (16.487761939s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-454295 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-454295 status -o json: exit status 2 (307.983025ms)

-- stdout --
	{"Name":"NoKubernetes-454295","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-454295
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-454295: (1.75197408s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (18.55s)
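
The intermediate status call above exits 2 while reporting Host Running and Kubelet/APIServer Stopped, which is the shape --no-kubernetes should produce: the non-zero exit reflects stopped components, not an error. The JSON it prints parses with a small struct:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Status mirrors the fields in the `minikube status -o json` line above.
    type Status struct {
        Name       string
        Host       string
        Kubelet    string
        APIServer  string
        Kubeconfig string
        Worker     bool
    }

    func main() {
        raw := `{"Name":"NoKubernetes-454295","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`
        var st Status
        if err := json.Unmarshal([]byte(raw), &st); err != nil {
            panic(err)
        }
        fmt.Printf("host=%s kubelet=%s\n", st.Host, st.Kubelet) // host=Running kubelet=Stopped
    }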

TestNoKubernetes/serial/Start (9.6s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-454295 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-454295 --no-kubernetes --driver=docker  --container-runtime=docker: (9.596848934s)
--- PASS: TestNoKubernetes/serial/Start (9.60s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-454295 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-454295 "sudo systemctl is-active --quiet service kubelet": exit status 1 (290.509772ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)
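
The ssh "Process exited with status 3" above is systemctl's way of saying the unit is inactive: is-active exits 0 for an active unit and non-zero (typically 3 for inactive) otherwise, so the expected exit status 1 confirms the kubelet is not running. A sketch of the same probe from Go (dropping the extra "service" token the test passes on the command line):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    // kubeletActive runs the same probe as the test. A clean exit means the
    // unit is active; an ExitError (exit code 3 for inactive) means it is not.
    func kubeletActive() (bool, error) {
        err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
        if err == nil {
            return true, nil
        }
        var ee *exec.ExitError
        if errors.As(err, &ee) {
            return false, nil // unit exists but is not active
        }
        return false, err // systemctl itself could not run
    }

    func main() {
        active, err := kubeletActive()
        fmt.Println(active, err)
    }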

TestNoKubernetes/serial/ProfileList (1.08s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.08s)

TestNoKubernetes/serial/Stop (1.22s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-454295
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-454295: (1.21657788s)
--- PASS: TestNoKubernetes/serial/Stop (1.22s)

TestNoKubernetes/serial/StartNoArgs (8.09s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-454295 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-454295 --driver=docker  --container-runtime=docker: (8.089358251s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.09s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-454295 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-454295 "sudo systemctl is-active --quiet service kubelet": exit status 1 (268.569454ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

TestStoppedBinaryUpgrade/Setup (0.58s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.58s)

TestStoppedBinaryUpgrade/Upgrade (86.9s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1039933326 start -p stopped-upgrade-974675 --memory=2200 --vm-driver=docker  --container-runtime=docker
E0920 23:05:20.328229 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/skaffold-344965/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:05:20.334547 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/skaffold-344965/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:05:20.345819 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/skaffold-344965/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:05:20.367149 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/skaffold-344965/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:05:20.409038 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/skaffold-344965/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:05:20.491052 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/skaffold-344965/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:05:20.652488 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/skaffold-344965/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:05:20.974088 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/skaffold-344965/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:05:21.615921 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/skaffold-344965/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:05:22.897287 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/skaffold-344965/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:05:25.458712 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/skaffold-344965/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:05:30.580455 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/skaffold-344965/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1039933326 start -p stopped-upgrade-974675 --memory=2200 --vm-driver=docker  --container-runtime=docker: (39.76010152s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1039933326 -p stopped-upgrade-974675 stop
E0920 23:05:40.821911 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/skaffold-344965/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1039933326 -p stopped-upgrade-974675 stop: (11.051365651s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-974675 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0920 23:06:01.304286 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/skaffold-344965/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-974675 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (36.084369041s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (86.90s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.59s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-974675
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-974675: (1.584958961s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.59s)

                                                
                                    
TestPause/serial/Start (84.45s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-313419 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
E0920 23:08:04.188221 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/skaffold-344965/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-313419 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (1m24.45318921s)
--- PASS: TestPause/serial/Start (84.45s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (31.92s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-313419 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-313419 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (31.905032281s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (31.92s)

                                                
                                    
TestPause/serial/Pause (0.76s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-313419 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.76s)

                                                
                                    
TestPause/serial/VerifyStatus (0.38s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-313419 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-313419 --output=json --layout=cluster: exit status 2 (380.799429ms)

                                                
                                                
-- stdout --
	{"Name":"pause-313419","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-313419","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.38s)

                                                
                                    
TestPause/serial/Unpause (0.68s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-313419 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.68s)

                                                
                                    
TestPause/serial/PauseAgain (0.96s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-313419 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.96s)

                                                
                                    
TestPause/serial/DeletePaused (2.39s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-313419 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-313419 --alsologtostderr -v=5: (2.390938368s)
--- PASS: TestPause/serial/DeletePaused (2.39s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (5.23s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (5.182160202s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-313419
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-313419: exit status 1 (13.65486ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-313419: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (5.23s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (140.48s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-247832 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
E0920 23:10:48.032280 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/skaffold-344965/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-247832 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m20.47754558s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (140.48s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (8.57s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-247832 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [37b7fcac-4752-4d52-a3e3-c2319ca64fe3] Pending
helpers_test.go:344: "busybox" [37b7fcac-4752-4d52-a3e3-c2319ca64fe3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [37b7fcac-4752-4d52-a3e3-c2319ca64fe3] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.008164204s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-247832 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.57s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.13s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-247832 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-247832 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.13s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (10.9s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-247832 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-247832 --alsologtostderr -v=3: (10.899525384s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (10.90s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-247832 -n old-k8s-version-247832
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-247832 -n old-k8s-version-247832: exit status 7 (74.980487ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-247832 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (145.39s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-247832 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
E0920 23:14:01.020933 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/functional-866813/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-247832 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m25.006930098s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-247832 -n old-k8s-version-247832
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (145.39s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (50.35s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-875296 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0920 23:15:20.325920 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/skaffold-344965/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-875296 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (50.350579379s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (50.35s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-875296 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1f76cac8-e838-4169-b728-119e9ce93376] Pending
helpers_test.go:344: "busybox" [1f76cac8-e838-4169-b728-119e9ce93376] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [1f76cac8-e838-4169-b728-119e9ce93376] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004155071s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-875296 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.38s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.18s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-875296 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-875296 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.036999723s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-875296 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.18s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (10.97s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-875296 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-875296 --alsologtostderr -v=3: (10.971109367s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (10.97s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-4m6gq" [9f145af3-936b-495f-8526-461acc617cc0] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00356436s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-875296 -n default-k8s-diff-port-875296
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-875296 -n default-k8s-diff-port-875296: exit status 7 (75.622844ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-875296 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (270.7s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-875296 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-875296 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m30.336415247s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-875296 -n default-k8s-diff-port-875296
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (270.70s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-4m6gq" [9f145af3-936b-495f-8526-461acc617cc0] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005810879s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-247832 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-247832 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.94s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-247832 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-247832 -n old-k8s-version-247832
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-247832 -n old-k8s-version-247832: exit status 2 (409.758045ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-247832 -n old-k8s-version-247832
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-247832 -n old-k8s-version-247832: exit status 2 (441.959731ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-247832 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-247832 -n old-k8s-version-247832
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-247832 -n old-k8s-version-247832
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.94s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (48.68s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-460997 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-460997 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (48.67891853s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (48.68s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.37s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-460997 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [efe20923-2a03-4a42-bee7-39ae06a34f08] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [efe20923-2a03-4a42-bee7-39ae06a34f08] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.00562256s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-460997 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.37s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.14s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-460997 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-460997 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.026761103s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-460997 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.14s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (10.99s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-460997 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-460997 --alsologtostderr -v=3: (10.987874358s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (10.99s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-460997 -n embed-certs-460997
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-460997 -n embed-certs-460997: exit status 7 (68.717453ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-460997 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (266.85s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-460997 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0920 23:18:03.728544 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/old-k8s-version-247832/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:18:03.734931 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/old-k8s-version-247832/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:18:03.746289 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/old-k8s-version-247832/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:18:03.767695 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/old-k8s-version-247832/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:18:03.809324 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/old-k8s-version-247832/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:18:03.890854 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/old-k8s-version-247832/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:18:04.052528 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/old-k8s-version-247832/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:18:04.374102 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/old-k8s-version-247832/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:18:05.016205 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/old-k8s-version-247832/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:18:06.298199 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/old-k8s-version-247832/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:18:08.859792 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/old-k8s-version-247832/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:18:13.981773 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/old-k8s-version-247832/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:18:24.223855 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/old-k8s-version-247832/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:18:44.705280 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/old-k8s-version-247832/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:19:01.021331 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/functional-866813/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:19:07.400220 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/addons-860203/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:19:24.326163 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/addons-860203/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:19:25.666623 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/old-k8s-version-247832/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:20:20.326079 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/skaffold-344965/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-460997 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m26.402239156s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-460997 -n embed-certs-460997
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (266.85s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-65cc6" [01638717-90b2-4dfb-9f5e-91e9b7157e6c] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004961335s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-65cc6" [01638717-90b2-4dfb-9f5e-91e9b7157e6c] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004300527s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-875296 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.10s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-875296 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.96s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-875296 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-875296 -n default-k8s-diff-port-875296
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-875296 -n default-k8s-diff-port-875296: exit status 2 (334.605914ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-875296 -n default-k8s-diff-port-875296
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-875296 -n default-k8s-diff-port-875296: exit status 2 (342.308524ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-875296 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-875296 -n default-k8s-diff-port-875296
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-875296 -n default-k8s-diff-port-875296
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.96s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (52.85s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-337810 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0920 23:20:47.588245 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/old-k8s-version-247832/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-337810 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (52.851065219s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (52.85s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (10.44s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-337810 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [11f31a26-6613-4a7c-80cd-c1c7772990d6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [11f31a26-6613-4a7c-80cd-c1c7772990d6] Running
E0920 23:21:43.393644 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/skaffold-344965/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.003965006s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-337810 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.44s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-337810 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-337810 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.09s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-lnntv" [3664f6ac-40a5-441f-bdb6-29cdbe9f4257] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003460853s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (10.9s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-337810 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-337810 --alsologtostderr -v=3: (10.903691628s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (10.90s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-lnntv" [3664f6ac-40a5-441f-bdb6-29cdbe9f4257] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004062616s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-460997 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-460997 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.95s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-460997 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-460997 -n embed-certs-460997
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-460997 -n embed-certs-460997: exit status 2 (411.097907ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-460997 -n embed-certs-460997
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-460997 -n embed-certs-460997: exit status 2 (479.776632ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-460997 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p embed-certs-460997 --alsologtostderr -v=1: (1.007442775s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-460997 -n embed-certs-460997
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-460997 -n embed-certs-460997
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.95s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.32s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-337810 -n no-preload-337810
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-337810 -n no-preload-337810: exit status 7 (116.450241ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-337810 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.32s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (295.35s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-337810 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-337810 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m54.861044662s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-337810 -n no-preload-337810
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (295.35s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (44.32s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-855403 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-855403 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (44.317784614s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (44.32s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-855403 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-855403 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.196494752s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.20s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (11.16s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-855403 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-855403 --alsologtostderr -v=3: (11.162688229s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.16s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-855403 -n newest-cni-855403
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-855403 -n newest-cni-855403: exit status 7 (69.391225ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-855403 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (17.64s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-855403 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0920 23:23:03.728460 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/old-k8s-version-247832/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-855403 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (17.200190172s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-855403 -n newest-cni-855403
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (17.64s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-855403 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.01s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-855403 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-855403 -n newest-cni-855403
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-855403 -n newest-cni-855403: exit status 2 (317.62544ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-855403 -n newest-cni-855403
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-855403 -n newest-cni-855403: exit status 2 (356.65505ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-855403 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-855403 -n newest-cni-855403
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-855403 -n newest-cni-855403
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.01s)
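
The pause round-trip asserted here: after pausing, status reports the apiserver as "Paused" and the kubelet as "Stopped", each via exit status 2 (hence the "(may be ok)" markers); after unpausing, the same probes return cleanly. Replayed by hand against the same profile:

    out/minikube-linux-arm64 pause -p newest-cni-855403
    out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-855403   # "Paused", exit 2
    out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-855403     # "Stopped", exit 2
    out/minikube-linux-arm64 unpause -p newest-cni-855403
    out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-855403   # exits 0 once resumed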

                                                
                                    
TestNetworkPlugins/group/auto/Start (44.75s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-543313 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
E0920 23:23:31.429751 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/old-k8s-version-247832/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:23:44.092701 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/functional-866813/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:24:01.020970 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/functional-866813/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-543313 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (44.745613562s)
--- PASS: TestNetworkPlugins/group/auto/Start (44.75s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-543313 "pgrep -a kubelet"
I0920 23:24:09.210069 1436493 config.go:182] Loaded profile config "auto-543313": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.31s)
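
Each KubeletFlags subtest in this group inspects the kubelet command line on the node over SSH; pgrep -a prints the PID together with the full argument list, presumably so the harness can assert the expected networking flags for the plugin under test. The probe itself is a one-liner:

    out/minikube-linux-arm64 ssh -p auto-543313 "pgrep -a kubelet"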

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-543313 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-hjscn" [2ee4b2c8-1d7f-4342-9ab4-1a9add81f63f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-hjscn" [2ee4b2c8-1d7f-4342-9ab4-1a9add81f63f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004207117s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.30s)
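
The NetCatPod subtests all follow the same shape: force-replace a small netcat deployment from testdata, then poll until its pod (label app=netcat) is Running. Expressed with plain kubectl, with kubectl wait standing in for the harness's own poller:

    kubectl --context auto-543313 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context auto-543313 wait --for=condition=Ready pod -l app=netcat --timeout=15m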

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-543313 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-543313 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-543313 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.20s)
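
Taken together, DNS/Localhost/HairPin probe three distinct paths from inside the netcat pod: cluster DNS resolution, loopback, and hairpin traffic (the pod reaching itself back through its own netcat service). All three commands verbatim from the runs above:

    kubectl --context auto-543313 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context auto-543313 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context auto-543313 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"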

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (66.73s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-543313 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
E0920 23:25:20.326319 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/skaffold-344965/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:25:31.992652 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/default-k8s-diff-port-875296/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:25:31.999338 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/default-k8s-diff-port-875296/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:25:32.010827 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/default-k8s-diff-port-875296/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:25:32.032505 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/default-k8s-diff-port-875296/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:25:32.073939 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/default-k8s-diff-port-875296/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:25:32.155429 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/default-k8s-diff-port-875296/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:25:32.317604 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/default-k8s-diff-port-875296/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:25:32.639210 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/default-k8s-diff-port-875296/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:25:33.281277 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/default-k8s-diff-port-875296/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:25:34.562689 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/default-k8s-diff-port-875296/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:25:37.124197 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/default-k8s-diff-port-875296/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:25:42.246404 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/default-k8s-diff-port-875296/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-543313 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (1m6.726833803s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (66.73s)
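
The cert_rotation.go:171 errors interleaved with this run (and with several runs below) appear to come from a client-certificate watcher that still holds paths of profiles earlier tests have already deleted; they are background noise, not failures of the test in progress. When scanning a log like this one, filtering them out can help (the log filename here is hypothetical):

    grep -v 'cert_rotation.go:171' docker_linux_docker_arm64.log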

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-gsxjl" [7aa56538-1b18-4e7c-a2f1-3d27ece9f88b] Running
E0920 23:25:52.488455 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/default-k8s-diff-port-875296/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004073938s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-543313 "pgrep -a kubelet"
I0920 23:25:53.364649 1436493 config.go:182] Loaded profile config "kindnet-543313": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (9.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-543313 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-5wn8l" [05bfb243-5b89-4a0e-ae2a-d24bc2823b9a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-5wn8l" [05bfb243-5b89-4a0e-ae2a-d24bc2823b9a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.006699559s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.28s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-543313 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-543313 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-543313 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.20s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (76.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-543313 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-543313 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m16.023775542s)
--- PASS: TestNetworkPlugins/group/calico/Start (76.02s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-9x2dl" [25d2443d-7984-4627-8d5d-8642755b0f95] Running
E0920 23:26:53.932210 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/default-k8s-diff-port-875296/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003916051s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.13s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-9x2dl" [25d2443d-7984-4627-8d5d-8642755b0f95] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004649818s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-337810 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.13s)
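
Both dashboard checks above wait on the k8s-app=kubernetes-dashboard label selector in the kubernetes-dashboard namespace. A rough kubectl equivalent of the helper's poll loop:

    kubectl --context no-preload-337810 -n kubernetes-dashboard \
        wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m
    kubectl --context no-preload-337810 -n kubernetes-dashboard \
        describe deploy/dashboard-metrics-scraper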

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-337810 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (4.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-337810 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p no-preload-337810 --alsologtostderr -v=1: (1.039179794s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-337810 -n no-preload-337810
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-337810 -n no-preload-337810: exit status 2 (441.593802ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-337810 -n no-preload-337810
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-337810 -n no-preload-337810: exit status 2 (434.387608ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-337810 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-337810 -n no-preload-337810
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-337810 -n no-preload-337810
--- PASS: TestStartStop/group/no-preload/serial/Pause (4.21s)
E0920 23:32:50.613876 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/calico-543313/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:32:57.354900 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/no-preload-337810/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:33:00.855727 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/calico-543313/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:33:03.728612 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/old-k8s-version-247832/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (65s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-543313 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-543313 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (1m5.004603712s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (65.00s)
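
Unlike the other plugin groups, which pass a built-in name to --cni (kindnet, calico, flannel, bridge), this run hands --cni a path to a custom manifest, which minikube applies in place of a bundled CNI. Core invocation, with the ancillary flags from the log trimmed:

    out/minikube-linux-arm64 start -p custom-flannel-543313 --memory=3072 \
        --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=docker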

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-xvs65" [88abb4ef-dafe-4989-9cc6-c14533f47461] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003827223s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-543313 "pgrep -a kubelet"
I0920 23:27:46.760388 1436493 config.go:182] Loaded profile config "calico-543313": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.40s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (12.55s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-543313 replace --force -f testdata/netcat-deployment.yaml
I0920 23:27:47.163294 1436493 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I0920 23:27:47.278070 1436493 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-k7jwd" [a195156c-d6a6-4225-a34e-dcc44ad919a4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-k7jwd" [a195156c-d6a6-4225-a34e-dcc44ad919a4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.004450027s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.55s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-543313 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.29s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-543313 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.31s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-543313 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.45s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-543313 "pgrep -a kubelet"
I0920 23:28:17.629053 1436493 config.go:182] Loaded profile config "custom-flannel-543313": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.36s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-543313 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-57pxh" [5de2a22d-34ac-4d7b-8edd-fe567cb8bebc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-57pxh" [5de2a22d-34ac-4d7b-8edd-fe567cb8bebc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.006228768s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.37s)

                                                
                                    
TestNetworkPlugins/group/false/Start (81.94s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p false-543313 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p false-543313 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (1m21.939501846s)
--- PASS: TestNetworkPlugins/group/false/Start (81.94s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-543313 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.26s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-543313 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-543313 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (80.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-543313 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
E0920 23:29:01.020724 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/functional-866813/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:29:09.481442 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/auto-543313/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:29:09.488311 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/auto-543313/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:29:09.499680 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/auto-543313/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:29:09.521097 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/auto-543313/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:29:09.562536 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/auto-543313/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:29:09.644665 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/auto-543313/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:29:09.806036 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/auto-543313/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:29:10.127893 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/auto-543313/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:29:10.769724 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/auto-543313/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:29:12.051963 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/auto-543313/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:29:14.613364 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/auto-543313/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:29:19.735351 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/auto-543313/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:29:24.326848 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/addons-860203/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:29:29.977565 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/auto-543313/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-543313 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (1m20.044363971s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (80.04s)
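
--enable-default-cni=true is the older spelling for requesting minikube's default bridge CNI (nowadays usually written --cni=bridge); keeping the old flag here presumably preserves backward-compatibility coverage. Core invocation, trimmed:

    out/minikube-linux-arm64 start -p enable-default-cni-543313 --memory=3072 \
        --enable-default-cni=true --driver=docker --container-runtime=docker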

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p false-543313 "pgrep -a kubelet"
I0920 23:29:47.648847 1436493 config.go:182] Loaded profile config "false-543313": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (11.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-543313 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-jc44r" [d0008677-3886-447f-b8eb-374e338c3d09] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0920 23:29:50.459351 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/auto-543313/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-jc44r" [d0008677-3886-447f-b8eb-374e338c3d09] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 11.004154587s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (11.27s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-543313 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-543313 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-543313 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-543313 "pgrep -a kubelet"
I0920 23:30:14.734657 1436493 config.go:182] Loaded profile config "enable-default-cni-543313": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.44s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-543313 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-kjgxp" [ebc71809-f3e4-4319-a4d7-9abfef5a3d82] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-kjgxp" [ebc71809-f3e4-4319-a4d7-9abfef5a3d82] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.005183939s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.43s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (60.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-543313 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-543313 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (1m0.258386348s)
--- PASS: TestNetworkPlugins/group/flannel/Start (60.26s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-543313 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.45s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-543313 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.28s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-543313 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.24s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (82.93s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-543313 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
E0920 23:30:52.255749 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/kindnet-543313/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:30:57.377550 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/kindnet-543313/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:30:59.695709 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/default-k8s-diff-port-875296/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:31:07.619078 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/kindnet-543313/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-543313 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (1m22.931757555s)
--- PASS: TestNetworkPlugins/group/bridge/Start (82.93s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-b4g89" [a0694583-5d22-492c-8524-8ecdd74bf6e4] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.008877368s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-543313 "pgrep -a kubelet"
I0920 23:31:27.135509 1436493 config.go:182] Loaded profile config "flannel-543313": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.39s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (12.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-543313 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-h5xdm" [8f291581-ad61-419a-acec-9f5925ba2e2b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0920 23:31:28.101049 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/kindnet-543313/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-h5xdm" [8f291581-ad61-419a-acec-9f5925ba2e2b] Running
E0920 23:31:35.414089 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/no-preload-337810/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:31:35.420635 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/no-preload-337810/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:31:35.432022 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/no-preload-337810/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:31:35.453747 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/no-preload-337810/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:31:35.495313 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/no-preload-337810/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:31:35.576646 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/no-preload-337810/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:31:35.737996 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/no-preload-337810/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:31:36.059561 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/no-preload-337810/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:31:36.701353 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/no-preload-337810/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:31:37.983874 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/no-preload-337810/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.003944248s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.30s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-543313 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-543313 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-543313 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (75.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kubenet-543313 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
E0920 23:32:09.062987 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/kindnet-543313/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kubenet-543313 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (1m15.395255267s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (75.40s)
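
This group selects no CNI at all: --network-plugin=kubenet asks the kubelet to use its built-in kubenet networking rather than a CNI plugin, exercising the non-CNI path of these network tests. Core invocation, trimmed:

    out/minikube-linux-arm64 start -p kubenet-543313 --memory=3072 \
        --network-plugin=kubenet --driver=docker --container-runtime=docker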

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-543313 "pgrep -a kubelet"
I0920 23:32:14.481482 1436493 config.go:182] Loaded profile config "bridge-543313": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.37s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (11.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-543313 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-hnvtk" [6ca14f22-3403-4171-9f67-edeba95feba1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0920 23:32:16.392644 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/no-preload-337810/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-hnvtk" [6ca14f22-3403-4171-9f67-edeba95feba1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.006662854s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.35s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-543313 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.28s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-543313 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-543313 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.20s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kubenet-543313 "pgrep -a kubelet"
I0920 23:33:17.109189 1436493 config.go:182] Loaded profile config "kubenet-543313": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.28s)

TestNetworkPlugins/group/kubenet/NetCatPod (10.27s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-543313 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-dztbd" [61e5ec08-6585-414a-b9f3-fb8be5febb6d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0920 23:33:17.972283 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/custom-flannel-543313/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:33:17.978750 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/custom-flannel-543313/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:33:17.990149 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/custom-flannel-543313/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:33:18.011694 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/custom-flannel-543313/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:33:18.053128 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/custom-flannel-543313/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:33:18.134675 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/custom-flannel-543313/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:33:18.296308 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/custom-flannel-543313/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:33:18.617813 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/custom-flannel-543313/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:33:19.259350 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/custom-flannel-543313/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:33:20.540892 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/custom-flannel-543313/client.crt: no such file or directory" logger="UnhandledError"
E0920 23:33:21.337193 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/calico-543313/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-dztbd" [61e5ec08-6585-414a-b9f3-fb8be5febb6d] Running
E0920 23:33:23.102266 1436493 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/custom-flannel-543313/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 10.004004123s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (10.27s)

TestNetworkPlugins/group/kubenet/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-543313 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.18s)

TestNetworkPlugins/group/kubenet/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-543313 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.16s)

TestNetworkPlugins/group/kubenet/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-543313 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.16s)
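
The DNS, Localhost, and HairPin probes above each reduce to a single command exec'd inside the netcat pod (HairPin verifies the pod can reach itself back through its own service name). A compact Go sketch that replays all three against an existing cluster; the context and resource names come from the log, everything else is illustrative:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	probes := [][]string{
		// DNS: resolve a cluster service through the in-cluster resolver.
		{"nslookup", "kubernetes.default"},
		// Localhost: the pod reaches its own port directly.
		{"/bin/sh", "-c", "nc -w 5 -i 5 -z localhost 8080"},
		// HairPin: the pod reaches itself via the netcat service name.
		{"/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080"},
	}
	for _, probe := range probes {
		args := append([]string{"--context", "kubenet-543313",
			"exec", "deployment/netcat", "--"}, probe...)
		if err := exec.Command("kubectl", args...).Run(); err != nil {
			fmt.Println("probe failed:", probe, err)
			continue
		}
		fmt.Println("probe ok:", probe)
	}
}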

Test skip (23/342)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnlyKic (0.52s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-030597 --alsologtostderr --driver=docker  --container-runtime=docker
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-030597" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-030597
--- SKIP: TestDownloadOnlyKic (0.52s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-279922" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-279922
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

TestNetworkPlugins/group/cilium (3.94s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-543313 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-543313
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-543313
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-543313
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-543313
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-543313
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-543313
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-543313
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-543313
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-543313
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-543313
>>> host: /etc/nsswitch.conf:
* Profile "cilium-543313" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543313"
>>> host: /etc/hosts:
* Profile "cilium-543313" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543313"
>>> host: /etc/resolv.conf:
* Profile "cilium-543313" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543313"
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-543313
>>> host: crictl pods:
* Profile "cilium-543313" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543313"
>>> host: crictl containers:
* Profile "cilium-543313" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543313"
>>> k8s: describe netcat deployment:
error: context "cilium-543313" does not exist
>>> k8s: describe netcat pod(s):
error: context "cilium-543313" does not exist
>>> k8s: netcat logs:
error: context "cilium-543313" does not exist
>>> k8s: describe coredns deployment:
error: context "cilium-543313" does not exist
>>> k8s: describe coredns pods:
error: context "cilium-543313" does not exist
>>> k8s: coredns logs:
error: context "cilium-543313" does not exist
>>> k8s: describe api server pod(s):
error: context "cilium-543313" does not exist
>>> k8s: api server logs:
error: context "cilium-543313" does not exist
>>> host: /etc/cni:
* Profile "cilium-543313" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543313"
>>> host: ip a s:
* Profile "cilium-543313" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543313"
>>> host: ip r s:
* Profile "cilium-543313" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543313"
>>> host: iptables-save:
* Profile "cilium-543313" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543313"
>>> host: iptables table nat:
* Profile "cilium-543313" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543313"
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-543313
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-543313
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-543313" does not exist
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-543313" does not exist
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-543313
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-543313
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-543313" does not exist
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-543313" does not exist
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-543313" does not exist
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-543313" does not exist
>>> k8s: kube-proxy logs:
error: context "cilium-543313" does not exist
>>> host: kubelet daemon status:
* Profile "cilium-543313" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543313"
>>> host: kubelet daemon config:
* Profile "cilium-543313" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543313"
>>> k8s: kubelet logs:
* Profile "cilium-543313" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543313"
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-543313" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543313"
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-543313" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543313"
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19672-1431110/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 20 Sep 2024 23:08:25 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-313419
contexts:
- context:
    cluster: pause-313419
    extensions:
    - extension:
        last-update: Fri, 20 Sep 2024 23:08:25 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-313419
  name: pause-313419
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-313419
  user:
    client-certificate: /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/pause-313419/client.crt
    client-key: /home/jenkins/minikube-integration/19672-1431110/.minikube/profiles/pause-313419/client.key
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-543313
>>> host: docker daemon status:
* Profile "cilium-543313" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543313"
>>> host: docker daemon config:
* Profile "cilium-543313" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543313"
>>> host: /etc/docker/daemon.json:
* Profile "cilium-543313" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543313"
>>> host: docker system info:
* Profile "cilium-543313" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543313"
>>> host: cri-docker daemon status:
* Profile "cilium-543313" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543313"
>>> host: cri-docker daemon config:
* Profile "cilium-543313" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543313"
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-543313" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543313"
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-543313" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543313"
>>> host: cri-dockerd version:
* Profile "cilium-543313" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543313"
>>> host: containerd daemon status:
* Profile "cilium-543313" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543313"
>>> host: containerd daemon config:
* Profile "cilium-543313" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543313"
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-543313" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543313"
>>> host: /etc/containerd/config.toml:
* Profile "cilium-543313" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543313"
>>> host: containerd config dump:
* Profile "cilium-543313" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543313"
>>> host: crio daemon status:
* Profile "cilium-543313" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543313"
>>> host: crio daemon config:
* Profile "cilium-543313" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543313"
>>> host: /etc/crio:
* Profile "cilium-543313" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543313"
>>> host: crio config:
* Profile "cilium-543313" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543313"
----------------------- debugLogs end: cilium-543313 [took: 3.784011624s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-543313" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-543313
--- SKIP: TestNetworkPlugins/group/cilium (3.94s)
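
Every kubectl failure in the cilium debugLogs above follows from the dumped kubectl config: current-context is empty and pause-313419 is the only entry, so --context cilium-543313 cannot resolve. A minimal Go sketch, assuming client-go is available, that checks the same thing kubectl checks before running any command; the fallback path and context name handling are illustrative:

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Use $KUBECONFIG if set, else the conventional ~/.kube/config.
	path := os.Getenv("KUBECONFIG")
	if path == "" {
		path = clientcmd.RecommendedHomeFile
	}
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		fmt.Println("load kubeconfig:", err)
		return
	}
	fmt.Printf("current-context: %q\n", cfg.CurrentContext)
	for name := range cfg.Contexts {
		fmt.Println("available context:", name)
	}
	if _, ok := cfg.Contexts["cilium-543313"]; !ok {
		// Corresponds to the "context was not found" errors above.
		fmt.Println(`no context named "cilium-543313"`)
	}
}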