Test Report: Docker_Linux_docker_arm64 18943

Commit: a95fbdf9550db8c431fa5a4c330192118acd2cbf
Date: 2024-08-31
Build: 36027

Failed tests (1/353)

Order  Failed test                   Duration
33     TestAddons/parallel/Registry  75.12s

TestAddons/parallel/Registry (75.12s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 2.67378ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:345: "registry-6fb4cdfc84-25jhq" [195b1392-2aad-40ff-a44b-0641056727a1] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.005392212s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:345: "registry-proxy-jxqpd" [4752523a-ac3b-4bb6-8199-9fb816d49c87] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.005506435s
addons_test.go:342: (dbg) Run:  kubectl --context addons-742639 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-742639 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-742639 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.103062502s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-742639 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:361: (dbg) Run:  out/minikube-linux-arm64 -p addons-742639 ip
2024/08/31 22:19:44 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-arm64 -p addons-742639 addons disable registry --alsologtostderr -v=1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:231: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:232: (dbg) Run:  docker inspect addons-742639
helpers_test.go:236: (dbg) docker inspect addons-742639:

-- stdout --
	[
	    {
	        "Id": "012ad1a192083a4cc8a5878d1e4e97caa871256355e845bbac7543fa8924185a",
	        "Created": "2024-08-31T22:06:25.867885471Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 8872,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-08-31T22:06:26.03685003Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:eb620c1d7126103417d4dc31eb6aaaf95b0878713d0303a36cb77002c31b0deb",
	        "ResolvConfPath": "/var/lib/docker/containers/012ad1a192083a4cc8a5878d1e4e97caa871256355e845bbac7543fa8924185a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/012ad1a192083a4cc8a5878d1e4e97caa871256355e845bbac7543fa8924185a/hostname",
	        "HostsPath": "/var/lib/docker/containers/012ad1a192083a4cc8a5878d1e4e97caa871256355e845bbac7543fa8924185a/hosts",
	        "LogPath": "/var/lib/docker/containers/012ad1a192083a4cc8a5878d1e4e97caa871256355e845bbac7543fa8924185a/012ad1a192083a4cc8a5878d1e4e97caa871256355e845bbac7543fa8924185a-json.log",
	        "Name": "/addons-742639",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-742639:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-742639",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/25faade0a14dc1ce6723df9aaa93c465337f31127c83005d71dd6cb08abd9031-init/diff:/var/lib/docker/overlay2/796fe12174d90f87afbf5074bb3e18a56ed349d345a6c41023071f32d9b76cd7/diff",
	                "MergedDir": "/var/lib/docker/overlay2/25faade0a14dc1ce6723df9aaa93c465337f31127c83005d71dd6cb08abd9031/merged",
	                "UpperDir": "/var/lib/docker/overlay2/25faade0a14dc1ce6723df9aaa93c465337f31127c83005d71dd6cb08abd9031/diff",
	                "WorkDir": "/var/lib/docker/overlay2/25faade0a14dc1ce6723df9aaa93c465337f31127c83005d71dd6cb08abd9031/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-742639",
	                "Source": "/var/lib/docker/volumes/addons-742639/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-742639",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-742639",
	                "name.minikube.sigs.k8s.io": "addons-742639",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2459638da2f862f9c6f3399a90e25a1bfeae557878911554be383b0a5ee39aae",
	            "SandboxKey": "/var/run/docker/netns/2459638da2f8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-742639": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "0886ad8c92aae9c81e1a82a7ff9e32d7b8816cdfd8bd33ed27c8b13cee29dccb",
	                    "EndpointID": "28df953738914d9017bd6885171fb2831d64bed8ecd21a611e00f81b42401021",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-742639",
	                        "012ad1a19208"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:240: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-742639 -n addons-742639
helpers_test.go:245: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p addons-742639 logs -n 25
helpers_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p addons-742639 logs -n 25: (1.328257819s)
helpers_test.go:253: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-217946   | jenkins | v1.33.1 | 31 Aug 24 22:05 UTC |                     |
	|         | -p download-only-217946              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.33.1 | 31 Aug 24 22:05 UTC | 31 Aug 24 22:05 UTC |
	| delete  | -p download-only-217946              | download-only-217946   | jenkins | v1.33.1 | 31 Aug 24 22:05 UTC | 31 Aug 24 22:05 UTC |
	| start   | -o=json --download-only              | download-only-613931   | jenkins | v1.33.1 | 31 Aug 24 22:05 UTC |                     |
	|         | -p download-only-613931              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0         |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.33.1 | 31 Aug 24 22:05 UTC | 31 Aug 24 22:05 UTC |
	| delete  | -p download-only-613931              | download-only-613931   | jenkins | v1.33.1 | 31 Aug 24 22:05 UTC | 31 Aug 24 22:05 UTC |
	| delete  | -p download-only-217946              | download-only-217946   | jenkins | v1.33.1 | 31 Aug 24 22:05 UTC | 31 Aug 24 22:05 UTC |
	| delete  | -p download-only-613931              | download-only-613931   | jenkins | v1.33.1 | 31 Aug 24 22:05 UTC | 31 Aug 24 22:05 UTC |
	| start   | --download-only -p                   | download-docker-262633 | jenkins | v1.33.1 | 31 Aug 24 22:05 UTC |                     |
	|         | download-docker-262633               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	| delete  | -p download-docker-262633            | download-docker-262633 | jenkins | v1.33.1 | 31 Aug 24 22:05 UTC | 31 Aug 24 22:06 UTC |
	| start   | --download-only -p                   | binary-mirror-864794   | jenkins | v1.33.1 | 31 Aug 24 22:06 UTC |                     |
	|         | binary-mirror-864794                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:43541               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-864794              | binary-mirror-864794   | jenkins | v1.33.1 | 31 Aug 24 22:06 UTC | 31 Aug 24 22:06 UTC |
	| addons  | enable dashboard -p                  | addons-742639          | jenkins | v1.33.1 | 31 Aug 24 22:06 UTC |                     |
	|         | addons-742639                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-742639          | jenkins | v1.33.1 | 31 Aug 24 22:06 UTC |                     |
	|         | addons-742639                        |                        |         |         |                     |                     |
	| start   | -p addons-742639 --wait=true         | addons-742639          | jenkins | v1.33.1 | 31 Aug 24 22:06 UTC | 31 Aug 24 22:09 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	| addons  | addons-742639 addons disable         | addons-742639          | jenkins | v1.33.1 | 31 Aug 24 22:10 UTC | 31 Aug 24 22:10 UTC |
	|         | volcano --alsologtostderr -v=1       |                        |         |         |                     |                     |
	| addons  | addons-742639 addons disable         | addons-742639          | jenkins | v1.33.1 | 31 Aug 24 22:18 UTC | 31 Aug 24 22:18 UTC |
	|         | yakd --alsologtostderr -v=1          |                        |         |         |                     |                     |
	| addons  | addons-742639 addons                 | addons-742639          | jenkins | v1.33.1 | 31 Aug 24 22:19 UTC | 31 Aug 24 22:19 UTC |
	|         | disable csi-hostpath-driver          |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-742639 addons                 | addons-742639          | jenkins | v1.33.1 | 31 Aug 24 22:19 UTC | 31 Aug 24 22:19 UTC |
	|         | disable volumesnapshots              |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin         | addons-742639          | jenkins | v1.33.1 | 31 Aug 24 22:19 UTC | 31 Aug 24 22:19 UTC |
	|         | -p addons-742639                     |                        |         |         |                     |                     |
	| ip      | addons-742639 ip                     | addons-742639          | jenkins | v1.33.1 | 31 Aug 24 22:19 UTC | 31 Aug 24 22:19 UTC |
	| addons  | addons-742639 addons disable         | addons-742639          | jenkins | v1.33.1 | 31 Aug 24 22:19 UTC | 31 Aug 24 22:19 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/31 22:06:01
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0831 22:06:01.041802    8370 out.go:345] Setting OutFile to fd 1 ...
	I0831 22:06:01.041983    8370 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:06:01.042011    8370 out.go:358] Setting ErrFile to fd 2...
	I0831 22:06:01.042034    8370 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:06:01.042298    8370 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-2279/.minikube/bin
	I0831 22:06:01.042798    8370 out.go:352] Setting JSON to false
	I0831 22:06:01.043668    8370 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":2906,"bootTime":1725139055,"procs":145,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0831 22:06:01.043745    8370 start.go:139] virtualization:  
	I0831 22:06:01.045984    8370 out.go:177] * [addons-742639] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0831 22:06:01.048123    8370 out.go:177]   - MINIKUBE_LOCATION=18943
	I0831 22:06:01.048202    8370 notify.go:220] Checking for updates...
	I0831 22:06:01.051464    8370 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 22:06:01.053013    8370 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18943-2279/kubeconfig
	I0831 22:06:01.054592    8370 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-2279/.minikube
	I0831 22:06:01.056171    8370 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0831 22:06:01.057797    8370 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0831 22:06:01.059706    8370 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 22:06:01.089076    8370 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0831 22:06:01.089226    8370 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0831 22:06:01.156517    8370 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-31 22:06:01.146408041 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0831 22:06:01.156636    8370 docker.go:307] overlay module found
	I0831 22:06:01.158632    8370 out.go:177] * Using the docker driver based on user configuration
	I0831 22:06:01.160473    8370 start.go:297] selected driver: docker
	I0831 22:06:01.160504    8370 start.go:901] validating driver "docker" against <nil>
	I0831 22:06:01.160524    8370 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0831 22:06:01.161216    8370 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0831 22:06:01.218067    8370 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-31 22:06:01.208955859 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0831 22:06:01.218243    8370 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0831 22:06:01.218475    8370 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0831 22:06:01.220372    8370 out.go:177] * Using Docker driver with root privileges
	I0831 22:06:01.222122    8370 cni.go:84] Creating CNI manager for ""
	I0831 22:06:01.222155    8370 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0831 22:06:01.222168    8370 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0831 22:06:01.222284    8370 start.go:340] cluster config:
	{Name:addons-742639 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-742639 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 22:06:01.225496    8370 out.go:177] * Starting "addons-742639" primary control-plane node in "addons-742639" cluster
	I0831 22:06:01.227248    8370 cache.go:121] Beginning downloading kic base image for docker with docker
	I0831 22:06:01.229116    8370 out.go:177] * Pulling base image v0.0.44-1724862063-19530 ...
	I0831 22:06:01.230617    8370 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0831 22:06:01.230676    8370 preload.go:146] Found local preload: /home/jenkins/minikube-integration/18943-2279/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0831 22:06:01.230689    8370 cache.go:56] Caching tarball of preloaded images
	I0831 22:06:01.230701    8370 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 in local docker daemon
	I0831 22:06:01.230772    8370 preload.go:172] Found /home/jenkins/minikube-integration/18943-2279/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0831 22:06:01.230782    8370 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0831 22:06:01.231125    8370 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/addons-742639/config.json ...
	I0831 22:06:01.231214    8370 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/addons-742639/config.json: {Name:mk88325cfa8b0f77cbdf77c12c62a3a47cd36812 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:06:01.247256    8370 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 to local cache
	I0831 22:06:01.247399    8370 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 in local cache directory
	I0831 22:06:01.247418    8370 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 in local cache directory, skipping pull
	I0831 22:06:01.247423    8370 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 exists in cache, skipping pull
	I0831 22:06:01.247431    8370 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 as a tarball
	I0831 22:06:01.247436    8370 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 from local cache
	I0831 22:06:18.867497    8370 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 from cached tarball
	I0831 22:06:18.867531    8370 cache.go:194] Successfully downloaded all kic artifacts
	I0831 22:06:18.867572    8370 start.go:360] acquireMachinesLock for addons-742639: {Name:mkad8a137166970e69c9d6a7869b3a439729bf17 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0831 22:06:18.867675    8370 start.go:364] duration metric: took 86.242µs to acquireMachinesLock for "addons-742639"
	I0831 22:06:18.867698    8370 start.go:93] Provisioning new machine with config: &{Name:addons-742639 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-742639 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0831 22:06:18.867793    8370 start.go:125] createHost starting for "" (driver="docker")
	I0831 22:06:18.869919    8370 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0831 22:06:18.870167    8370 start.go:159] libmachine.API.Create for "addons-742639" (driver="docker")
	I0831 22:06:18.870201    8370 client.go:168] LocalClient.Create starting
	I0831 22:06:18.870443    8370 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18943-2279/.minikube/certs/ca.pem
	I0831 22:06:20.112791    8370 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18943-2279/.minikube/certs/cert.pem
	I0831 22:06:20.812279    8370 cli_runner.go:164] Run: docker network inspect addons-742639 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0831 22:06:20.827043    8370 cli_runner.go:211] docker network inspect addons-742639 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0831 22:06:20.827121    8370 network_create.go:284] running [docker network inspect addons-742639] to gather additional debugging logs...
	I0831 22:06:20.827158    8370 cli_runner.go:164] Run: docker network inspect addons-742639
	W0831 22:06:20.842323    8370 cli_runner.go:211] docker network inspect addons-742639 returned with exit code 1
	I0831 22:06:20.842353    8370 network_create.go:287] error running [docker network inspect addons-742639]: docker network inspect addons-742639: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-742639 not found
	I0831 22:06:20.842366    8370 network_create.go:289] output of [docker network inspect addons-742639]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-742639 not found
	
	** /stderr **
	I0831 22:06:20.842466    8370 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0831 22:06:20.857863    8370 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4000485d50}
	I0831 22:06:20.857908    8370 network_create.go:124] attempt to create docker network addons-742639 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0831 22:06:20.857965    8370 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-742639 addons-742639
	I0831 22:06:20.929386    8370 network_create.go:108] docker network addons-742639 192.168.49.0/24 created
	I0831 22:06:20.929419    8370 kic.go:121] calculated static IP "192.168.49.2" for the "addons-742639" container
	I0831 22:06:20.929492    8370 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0831 22:06:20.944259    8370 cli_runner.go:164] Run: docker volume create addons-742639 --label name.minikube.sigs.k8s.io=addons-742639 --label created_by.minikube.sigs.k8s.io=true
	I0831 22:06:20.960831    8370 oci.go:103] Successfully created a docker volume addons-742639
	I0831 22:06:20.960927    8370 cli_runner.go:164] Run: docker run --rm --name addons-742639-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-742639 --entrypoint /usr/bin/test -v addons-742639:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 -d /var/lib
	I0831 22:06:22.052452    8370 cli_runner.go:217] Completed: docker run --rm --name addons-742639-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-742639 --entrypoint /usr/bin/test -v addons-742639:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 -d /var/lib: (1.091488259s)
	I0831 22:06:22.052483    8370 oci.go:107] Successfully prepared a docker volume addons-742639
	I0831 22:06:22.052509    8370 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0831 22:06:22.052528    8370 kic.go:194] Starting extracting preloaded images to volume ...
	I0831 22:06:22.052612    8370 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-2279/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-742639:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0831 22:06:25.799237    8370 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-2279/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-742639:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 -I lz4 -xf /preloaded.tar -C /extractDir: (3.746575301s)
	I0831 22:06:25.799269    8370 kic.go:203] duration metric: took 3.746737587s to extract preloaded images to volume ...
	W0831 22:06:25.799425    8370 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0831 22:06:25.799538    8370 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0831 22:06:25.853758    8370 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-742639 --name addons-742639 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-742639 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-742639 --network addons-742639 --ip 192.168.49.2 --volume addons-742639:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0
	I0831 22:06:26.201241    8370 cli_runner.go:164] Run: docker container inspect addons-742639 --format={{.State.Running}}
	I0831 22:06:26.228355    8370 cli_runner.go:164] Run: docker container inspect addons-742639 --format={{.State.Status}}
	I0831 22:06:26.253751    8370 cli_runner.go:164] Run: docker exec addons-742639 stat /var/lib/dpkg/alternatives/iptables
	I0831 22:06:26.326702    8370 oci.go:144] the created container "addons-742639" has a running status.
	I0831 22:06:26.326726    8370 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18943-2279/.minikube/machines/addons-742639/id_rsa...
	I0831 22:06:26.615611    8370 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18943-2279/.minikube/machines/addons-742639/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0831 22:06:26.648813    8370 cli_runner.go:164] Run: docker container inspect addons-742639 --format={{.State.Status}}
	I0831 22:06:26.674041    8370 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0831 22:06:26.674069    8370 kic_runner.go:114] Args: [docker exec --privileged addons-742639 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0831 22:06:26.764712    8370 cli_runner.go:164] Run: docker container inspect addons-742639 --format={{.State.Status}}
	I0831 22:06:26.790441    8370 machine.go:93] provisionDockerMachine start ...
	I0831 22:06:26.790538    8370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-742639
	I0831 22:06:26.813167    8370 main.go:141] libmachine: Using SSH client type: native
	I0831 22:06:26.813438    8370 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0831 22:06:26.813452    8370 main.go:141] libmachine: About to run SSH command:
	hostname
	I0831 22:06:26.814112    8370 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51774->127.0.0.1:32768: read: connection reset by peer
	I0831 22:06:29.946538    8370 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-742639
	
	I0831 22:06:29.946559    8370 ubuntu.go:169] provisioning hostname "addons-742639"
	I0831 22:06:29.946633    8370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-742639
	I0831 22:06:29.964917    8370 main.go:141] libmachine: Using SSH client type: native
	I0831 22:06:29.965153    8370 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0831 22:06:29.965168    8370 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-742639 && echo "addons-742639" | sudo tee /etc/hostname
	I0831 22:06:30.119731    8370 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-742639
	
	I0831 22:06:30.119986    8370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-742639
	I0831 22:06:30.140794    8370 main.go:141] libmachine: Using SSH client type: native
	I0831 22:06:30.141067    8370 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0831 22:06:30.141096    8370 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-742639' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-742639/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-742639' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0831 22:06:30.280218    8370 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0831 22:06:30.280291    8370 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-2279/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-2279/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-2279/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-2279/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-2279/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-2279/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-2279/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-2279/.minikube}
	I0831 22:06:30.280324    8370 ubuntu.go:177] setting up certificates
	I0831 22:06:30.280364    8370 provision.go:84] configureAuth start
	I0831 22:06:30.280479    8370 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "addons-742639")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-742639
	I0831 22:06:30.300463    8370 provision.go:143] copyHostCerts
	I0831 22:06:30.300548    8370 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-2279/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18943-2279/.minikube/cert.pem (1123 bytes)
	I0831 22:06:30.300676    8370 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-2279/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18943-2279/.minikube/key.pem (1679 bytes)
	I0831 22:06:30.300755    8370 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-2279/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18943-2279/.minikube/ca.pem (1078 bytes)
	I0831 22:06:30.300809    8370 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18943-2279/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18943-2279/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18943-2279/.minikube/certs/ca-key.pem org=jenkins.addons-742639 san=[127.0.0.1 192.168.49.2 addons-742639 localhost minikube]
	I0831 22:06:30.803612    8370 provision.go:177] copyRemoteCerts
	I0831 22:06:30.803696    8370 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0831 22:06:30.803738    8370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-742639
	I0831 22:06:30.823128    8370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/18943-2279/.minikube/machines/addons-742639/id_rsa Username:docker}
	I0831 22:06:30.919874    8370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-2279/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0831 22:06:30.944159    8370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-2279/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0831 22:06:30.968459    8370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-2279/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0831 22:06:30.992001    8370 provision.go:87] duration metric: took 711.604185ms to configureAuth
	I0831 22:06:30.992032    8370 ubuntu.go:193] setting minikube options for container-runtime
	I0831 22:06:30.992217    8370 config.go:182] Loaded profile config "addons-742639": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 22:06:30.992278    8370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-742639
	I0831 22:06:31.008866    8370 main.go:141] libmachine: Using SSH client type: native
	I0831 22:06:31.009115    8370 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0831 22:06:31.009129    8370 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0831 22:06:31.143485    8370 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0831 22:06:31.143546    8370 ubuntu.go:71] root file system type: overlay
	I0831 22:06:31.143670    8370 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0831 22:06:31.143740    8370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-742639
	I0831 22:06:31.162556    8370 main.go:141] libmachine: Using SSH client type: native
	I0831 22:06:31.162913    8370 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0831 22:06:31.163011    8370 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0831 22:06:31.310819    8370 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0831 22:06:31.310924    8370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-742639
	I0831 22:06:31.330351    8370 main.go:141] libmachine: Using SSH client type: native
	I0831 22:06:31.330587    8370 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0831 22:06:31.330611    8370 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0831 22:06:32.071701    8370 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-08-27 14:13:43.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-08-31 22:06:31.306024315 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0831 22:06:32.071739    8370 machine.go:96] duration metric: took 5.281276043s to provisionDockerMachine
	I0831 22:06:32.071750    8370 client.go:171] duration metric: took 13.201544147s to LocalClient.Create
	I0831 22:06:32.071763    8370 start.go:167] duration metric: took 13.201596372s to libmachine.API.Create "addons-742639"
	I0831 22:06:32.071779    8370 start.go:293] postStartSetup for "addons-742639" (driver="docker")
	I0831 22:06:32.071790    8370 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0831 22:06:32.071865    8370 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0831 22:06:32.071912    8370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-742639
	I0831 22:06:32.089165    8370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/18943-2279/.minikube/machines/addons-742639/id_rsa Username:docker}
	I0831 22:06:32.184189    8370 ssh_runner.go:195] Run: cat /etc/os-release
	I0831 22:06:32.187280    8370 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0831 22:06:32.187356    8370 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0831 22:06:32.187374    8370 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0831 22:06:32.187381    8370 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0831 22:06:32.187391    8370 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-2279/.minikube/addons for local assets ...
	I0831 22:06:32.187471    8370 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-2279/.minikube/files for local assets ...
	I0831 22:06:32.187497    8370 start.go:296] duration metric: took 115.71261ms for postStartSetup
	I0831 22:06:32.187811    8370 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "addons-742639")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-742639
	I0831 22:06:32.204164    8370 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/addons-742639/config.json ...
	I0831 22:06:32.204448    8370 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0831 22:06:32.204502    8370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-742639
	I0831 22:06:32.220573    8370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/18943-2279/.minikube/machines/addons-742639/id_rsa Username:docker}
	I0831 22:06:32.311681    8370 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0831 22:06:32.316212    8370 start.go:128] duration metric: took 13.448402697s to createHost
	I0831 22:06:32.316234    8370 start.go:83] releasing machines lock for "addons-742639", held for 13.448550223s
	I0831 22:06:32.316307    8370 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "addons-742639")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-742639
	I0831 22:06:32.333993    8370 ssh_runner.go:195] Run: cat /version.json
	I0831 22:06:32.334044    8370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-742639
	I0831 22:06:32.334313    8370 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0831 22:06:32.334347    8370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-742639
	I0831 22:06:32.351583    8370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/18943-2279/.minikube/machines/addons-742639/id_rsa Username:docker}
	I0831 22:06:32.353942    8370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/18943-2279/.minikube/machines/addons-742639/id_rsa Username:docker}
	I0831 22:06:32.447308    8370 ssh_runner.go:195] Run: systemctl --version
	I0831 22:06:32.569814    8370 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0831 22:06:32.574022    8370 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0831 22:06:32.599208    8370 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0831 22:06:32.599330    8370 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0831 22:06:32.627512    8370 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0831 22:06:32.627536    8370 start.go:495] detecting cgroup driver to use...
	I0831 22:06:32.627586    8370 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0831 22:06:32.627699    8370 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0831 22:06:32.643950    8370 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0831 22:06:32.653665    8370 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0831 22:06:32.663232    8370 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0831 22:06:32.663336    8370 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0831 22:06:32.672777    8370 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0831 22:06:32.682311    8370 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0831 22:06:32.691660    8370 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0831 22:06:32.701324    8370 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0831 22:06:32.710979    8370 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0831 22:06:32.721771    8370 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0831 22:06:32.731820    8370 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0831 22:06:32.741614    8370 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0831 22:06:32.750121    8370 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0831 22:06:32.758737    8370 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 22:06:32.843606    8370 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0831 22:06:32.952506    8370 start.go:495] detecting cgroup driver to use...
	I0831 22:06:32.952551    8370 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0831 22:06:32.952599    8370 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0831 22:06:32.966671    8370 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0831 22:06:32.966746    8370 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0831 22:06:32.979881    8370 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0831 22:06:32.999212    8370 ssh_runner.go:195] Run: which cri-dockerd
	I0831 22:06:33.003322    8370 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0831 22:06:33.022255    8370 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0831 22:06:33.045162    8370 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0831 22:06:33.155585    8370 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0831 22:06:33.262625    8370 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0831 22:06:33.262753    8370 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0831 22:06:33.283273    8370 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 22:06:33.362742    8370 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0831 22:06:33.623055    8370 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0831 22:06:33.634930    8370 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0831 22:06:33.647202    8370 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0831 22:06:33.733481    8370 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0831 22:06:33.816240    8370 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 22:06:33.896188    8370 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0831 22:06:33.910300    8370 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0831 22:06:33.924092    8370 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 22:06:34.011606    8370 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0831 22:06:34.086968    8370 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0831 22:06:34.087126    8370 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0831 22:06:34.090776    8370 start.go:563] Will wait 60s for crictl version
	I0831 22:06:34.090889    8370 ssh_runner.go:195] Run: which crictl
	I0831 22:06:34.094592    8370 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0831 22:06:34.133805    8370 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.0
	RuntimeApiVersion:  v1
	I0831 22:06:34.133920    8370 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0831 22:06:34.155216    8370 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0831 22:06:34.180090    8370 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.2.0 ...
	I0831 22:06:34.180218    8370 cli_runner.go:164] Run: docker network inspect addons-742639 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0831 22:06:34.195511    8370 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0831 22:06:34.199023    8370 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0831 22:06:34.209885    8370 kubeadm.go:883] updating cluster {Name:addons-742639 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-742639 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0831 22:06:34.210004    8370 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0831 22:06:34.210064    8370 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0831 22:06:34.227967    8370 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0831 22:06:34.227989    8370 docker.go:615] Images already preloaded, skipping extraction
	I0831 22:06:34.228052    8370 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0831 22:06:34.246202    8370 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0831 22:06:34.246223    8370 cache_images.go:84] Images are preloaded, skipping loading
	I0831 22:06:34.246248    8370 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.0 docker true true} ...
	I0831 22:06:34.246359    8370 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-742639 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-742639 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0831 22:06:34.246425    8370 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0831 22:06:34.291472    8370 cni.go:84] Creating CNI manager for ""
	I0831 22:06:34.291498    8370 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0831 22:06:34.291508    8370 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0831 22:06:34.291529    8370 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-742639 NodeName:addons-742639 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0831 22:06:34.291677    8370 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-742639"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0831 22:06:34.291741    8370 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0831 22:06:34.300685    8370 binaries.go:44] Found k8s binaries, skipping transfer
	I0831 22:06:34.300749    8370 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0831 22:06:34.309137    8370 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0831 22:06:34.326796    8370 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0831 22:06:34.344627    8370 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0831 22:06:34.362644    8370 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0831 22:06:34.366241    8370 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0831 22:06:34.376600    8370 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 22:06:34.466077    8370 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0831 22:06:34.480418    8370 certs.go:68] Setting up /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/addons-742639 for IP: 192.168.49.2
	I0831 22:06:34.480443    8370 certs.go:194] generating shared ca certs ...
	I0831 22:06:34.480460    8370 certs.go:226] acquiring lock for ca certs: {Name:mk6213cea71ca52ced121955064eca25771aaa95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:06:34.480585    8370 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18943-2279/.minikube/ca.key
	I0831 22:06:35.634371    8370 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-2279/.minikube/ca.crt ...
	I0831 22:06:35.634407    8370 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-2279/.minikube/ca.crt: {Name:mkeecb17c435213f331809716b6c8bb4a252a36c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:06:35.634600    8370 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-2279/.minikube/ca.key ...
	I0831 22:06:35.634612    8370 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-2279/.minikube/ca.key: {Name:mkbd2b515fceec315a23d42e6db6f34e66114956 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:06:35.634699    8370 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18943-2279/.minikube/proxy-client-ca.key
	I0831 22:06:36.497357    8370 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-2279/.minikube/proxy-client-ca.crt ...
	I0831 22:06:36.497390    8370 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-2279/.minikube/proxy-client-ca.crt: {Name:mke4c0b8cfee848b6e5b88f4760e4d52ce6a2c5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:06:36.497586    8370 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-2279/.minikube/proxy-client-ca.key ...
	I0831 22:06:36.497612    8370 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-2279/.minikube/proxy-client-ca.key: {Name:mk879069e3be942706aadfd9e2d2cfd233da4570 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:06:36.497696    8370 certs.go:256] generating profile certs ...
	I0831 22:06:36.497756    8370 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/addons-742639/client.key
	I0831 22:06:36.497773    8370 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/addons-742639/client.crt with IP's: []
	I0831 22:06:37.884345    8370 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/addons-742639/client.crt ...
	I0831 22:06:37.884379    8370 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/addons-742639/client.crt: {Name:mkb43cb18e972c7fa5478d5d014db558bb3be13a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:06:37.884562    8370 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/addons-742639/client.key ...
	I0831 22:06:37.884572    8370 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/addons-742639/client.key: {Name:mk3d31232ca511d601e14be6abf6e405df837be7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:06:37.884683    8370 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/addons-742639/apiserver.key.ec62e152
	I0831 22:06:37.884708    8370 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/addons-742639/apiserver.crt.ec62e152 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0831 22:06:38.382421    8370 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/addons-742639/apiserver.crt.ec62e152 ...
	I0831 22:06:38.382454    8370 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/addons-742639/apiserver.crt.ec62e152: {Name:mk11e404220ddbe623157499dbc8eed4542faf45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:06:38.382621    8370 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/addons-742639/apiserver.key.ec62e152 ...
	I0831 22:06:38.382639    8370 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/addons-742639/apiserver.key.ec62e152: {Name:mkef40c44b20f4d583303da422f904cc1b891dcb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:06:38.382708    8370 certs.go:381] copying /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/addons-742639/apiserver.crt.ec62e152 -> /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/addons-742639/apiserver.crt
	I0831 22:06:38.382795    8370 certs.go:385] copying /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/addons-742639/apiserver.key.ec62e152 -> /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/addons-742639/apiserver.key
	I0831 22:06:38.382851    8370 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/addons-742639/proxy-client.key
	I0831 22:06:38.382872    8370 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/addons-742639/proxy-client.crt with IP's: []
	I0831 22:06:38.880119    8370 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/addons-742639/proxy-client.crt ...
	I0831 22:06:38.880151    8370 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/addons-742639/proxy-client.crt: {Name:mk76d04ab54cdd1251de70e21424008eabb3e8fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:06:38.880328    8370 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/addons-742639/proxy-client.key ...
	I0831 22:06:38.880339    8370 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/addons-742639/proxy-client.key: {Name:mkb7062656e7882e5db7ff628e805d02174a3838 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:06:38.880528    8370 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-2279/.minikube/certs/ca-key.pem (1675 bytes)
	I0831 22:06:38.880571    8370 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-2279/.minikube/certs/ca.pem (1078 bytes)
	I0831 22:06:38.880602    8370 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-2279/.minikube/certs/cert.pem (1123 bytes)
	I0831 22:06:38.880630    8370 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-2279/.minikube/certs/key.pem (1679 bytes)
	I0831 22:06:38.881241    8370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-2279/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0831 22:06:38.909082    8370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-2279/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0831 22:06:38.935826    8370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-2279/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0831 22:06:38.960103    8370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-2279/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0831 22:06:38.983081    8370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/addons-742639/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0831 22:06:39.006400    8370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/addons-742639/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0831 22:06:39.036386    8370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/addons-742639/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0831 22:06:39.064320    8370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/addons-742639/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0831 22:06:39.095644    8370 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-2279/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0831 22:06:39.123288    8370 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0831 22:06:39.143607    8370 ssh_runner.go:195] Run: openssl version
	I0831 22:06:39.149152    8370 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0831 22:06:39.158978    8370 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0831 22:06:39.162617    8370 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 31 22:06 /usr/share/ca-certificates/minikubeCA.pem
	I0831 22:06:39.162679    8370 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0831 22:06:39.169740    8370 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0831 22:06:39.179317    8370 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0831 22:06:39.182642    8370 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0831 22:06:39.182696    8370 kubeadm.go:392] StartCluster: {Name:addons-742639 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-742639 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 22:06:39.182831    8370 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0831 22:06:39.199420    8370 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0831 22:06:39.208465    8370 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0831 22:06:39.217540    8370 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0831 22:06:39.217623    8370 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0831 22:06:39.226442    8370 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0831 22:06:39.226502    8370 kubeadm.go:157] found existing configuration files:
	
	I0831 22:06:39.226561    8370 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0831 22:06:39.235279    8370 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0831 22:06:39.235344    8370 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0831 22:06:39.243880    8370 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0831 22:06:39.252518    8370 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0831 22:06:39.252599    8370 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0831 22:06:39.261081    8370 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0831 22:06:39.269528    8370 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0831 22:06:39.269591    8370 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0831 22:06:39.278379    8370 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0831 22:06:39.287071    8370 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0831 22:06:39.287152    8370 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0831 22:06:39.295428    8370 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0831 22:06:39.335191    8370 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0831 22:06:39.335346    8370 kubeadm.go:310] [preflight] Running pre-flight checks
	I0831 22:06:39.360699    8370 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0831 22:06:39.360869    8370 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1068-aws
	I0831 22:06:39.360925    8370 kubeadm.go:310] OS: Linux
	I0831 22:06:39.360996    8370 kubeadm.go:310] CGROUPS_CPU: enabled
	I0831 22:06:39.361063    8370 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0831 22:06:39.361134    8370 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0831 22:06:39.361197    8370 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0831 22:06:39.361273    8370 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0831 22:06:39.361337    8370 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0831 22:06:39.361405    8370 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0831 22:06:39.361469    8370 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0831 22:06:39.361549    8370 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0831 22:06:39.427711    8370 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0831 22:06:39.427822    8370 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0831 22:06:39.427931    8370 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0831 22:06:39.439926    8370 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0831 22:06:39.443747    8370 out.go:235]   - Generating certificates and keys ...
	I0831 22:06:39.443946    8370 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0831 22:06:39.444037    8370 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0831 22:06:40.143494    8370 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0831 22:06:41.494409    8370 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0831 22:06:42.273386    8370 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0831 22:06:42.606658    8370 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0831 22:06:43.107545    8370 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0831 22:06:43.107848    8370 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-742639 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0831 22:06:43.500977    8370 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0831 22:06:43.501305    8370 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-742639 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0831 22:06:44.110971    8370 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0831 22:06:44.343122    8370 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0831 22:06:45.136332    8370 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0831 22:06:45.136404    8370 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0831 22:06:45.943050    8370 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0831 22:06:46.924431    8370 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0831 22:06:47.265010    8370 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0831 22:06:47.638536    8370 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0831 22:06:48.002779    8370 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0831 22:06:48.003582    8370 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0831 22:06:48.006828    8370 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0831 22:06:48.009202    8370 out.go:235]   - Booting up control plane ...
	I0831 22:06:48.009303    8370 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0831 22:06:48.009386    8370 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0831 22:06:48.009861    8370 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0831 22:06:48.036425    8370 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0831 22:06:48.046940    8370 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0831 22:06:48.047009    8370 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0831 22:06:48.157467    8370 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0831 22:06:48.157585    8370 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0831 22:06:49.659363    8370 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.501715138s
	I0831 22:06:49.659455    8370 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0831 22:06:55.660684    8370 kubeadm.go:310] [api-check] The API server is healthy after 6.001570987s
	I0831 22:06:55.680433    8370 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0831 22:06:55.694536    8370 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0831 22:06:55.718284    8370 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0831 22:06:55.718478    8370 kubeadm.go:310] [mark-control-plane] Marking the node addons-742639 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0831 22:06:55.730706    8370 kubeadm.go:310] [bootstrap-token] Using token: grjk8s.fuafkjyeuvu2lwsx
	I0831 22:06:55.732320    8370 out.go:235]   - Configuring RBAC rules ...
	I0831 22:06:55.732443    8370 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0831 22:06:55.737317    8370 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0831 22:06:55.746190    8370 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0831 22:06:55.750196    8370 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0831 22:06:55.753851    8370 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0831 22:06:55.757368    8370 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0831 22:06:56.067225    8370 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0831 22:06:56.492849    8370 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0831 22:06:57.067432    8370 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0831 22:06:57.068728    8370 kubeadm.go:310] 
	I0831 22:06:57.068802    8370 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0831 22:06:57.068812    8370 kubeadm.go:310] 
	I0831 22:06:57.068887    8370 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0831 22:06:57.068895    8370 kubeadm.go:310] 
	I0831 22:06:57.068920    8370 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0831 22:06:57.068986    8370 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0831 22:06:57.069038    8370 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0831 22:06:57.069046    8370 kubeadm.go:310] 
	I0831 22:06:57.069098    8370 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0831 22:06:57.069105    8370 kubeadm.go:310] 
	I0831 22:06:57.069151    8370 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0831 22:06:57.069159    8370 kubeadm.go:310] 
	I0831 22:06:57.069208    8370 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0831 22:06:57.069286    8370 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0831 22:06:57.069354    8370 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0831 22:06:57.069359    8370 kubeadm.go:310] 
	I0831 22:06:57.069454    8370 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0831 22:06:57.069528    8370 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0831 22:06:57.069532    8370 kubeadm.go:310] 
	I0831 22:06:57.069613    8370 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token grjk8s.fuafkjyeuvu2lwsx \
	I0831 22:06:57.069713    8370 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d1b8075b22a4a194c2bfc0834ee68db33c03c1cb4c23257c92fbb5297bf0ac6e \
	I0831 22:06:57.069736    8370 kubeadm.go:310] 	--control-plane 
	I0831 22:06:57.069740    8370 kubeadm.go:310] 
	I0831 22:06:57.069822    8370 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0831 22:06:57.069827    8370 kubeadm.go:310] 
	I0831 22:06:57.069905    8370 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token grjk8s.fuafkjyeuvu2lwsx \
	I0831 22:06:57.070004    8370 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:d1b8075b22a4a194c2bfc0834ee68db33c03c1cb4c23257c92fbb5297bf0ac6e 
	I0831 22:06:57.073821    8370 kubeadm.go:310] W0831 22:06:39.332079    1830 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0831 22:06:57.074217    8370 kubeadm.go:310] W0831 22:06:39.332916    1830 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0831 22:06:57.074459    8370 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-aws\n", err: exit status 1
	I0831 22:06:57.074591    8370 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0831 22:06:57.074632    8370 cni.go:84] Creating CNI manager for ""
	I0831 22:06:57.074658    8370 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0831 22:06:57.077022    8370 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0831 22:06:57.078706    8370 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0831 22:06:57.088295    8370 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0831 22:06:57.109160    8370 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0831 22:06:57.109230    8370 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:06:57.109274    8370 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-742639 minikube.k8s.io/updated_at=2024_08_31T22_06_57_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8ab9a20c866aaad18bea6fac47c5d146303457d2 minikube.k8s.io/name=addons-742639 minikube.k8s.io/primary=true
	I0831 22:06:57.254916    8370 ops.go:34] apiserver oom_adj: -16
	I0831 22:06:57.255033    8370 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:06:57.755702    8370 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:06:58.255292    8370 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:06:58.755179    8370 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:06:59.255978    8370 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:06:59.755339    8370 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:07:00.260228    8370 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:07:00.756076    8370 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:07:01.255534    8370 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:07:01.755855    8370 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0831 22:07:01.850898    8370 kubeadm.go:1113] duration metric: took 4.741733853s to wait for elevateKubeSystemPrivileges
	I0831 22:07:01.850928    8370 kubeadm.go:394] duration metric: took 22.668234575s to StartCluster
	I0831 22:07:01.850945    8370 settings.go:142] acquiring lock: {Name:mkd755b66299b6a3720b1696cfe6da7dd50820c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:07:01.851056    8370 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18943-2279/kubeconfig
	I0831 22:07:01.851523    8370 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-2279/kubeconfig: {Name:mk851d81432f4f19f35bb24c1bfd11797a99091b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:07:01.851714    8370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0831 22:07:01.851740    8370 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0831 22:07:01.851984    8370 config.go:182] Loaded profile config "addons-742639": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 22:07:01.852044    8370 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0831 22:07:01.852120    8370 addons.go:69] Setting yakd=true in profile "addons-742639"
	I0831 22:07:01.852140    8370 addons.go:234] Setting addon yakd=true in "addons-742639"
	I0831 22:07:01.852163    8370 host.go:66] Checking if "addons-742639" exists ...
	I0831 22:07:01.852620    8370 cli_runner.go:164] Run: docker container inspect addons-742639 --format={{.State.Status}}
	I0831 22:07:01.853085    8370 addons.go:69] Setting metrics-server=true in profile "addons-742639"
	I0831 22:07:01.853110    8370 addons.go:234] Setting addon metrics-server=true in "addons-742639"
	I0831 22:07:01.853137    8370 host.go:66] Checking if "addons-742639" exists ...
	I0831 22:07:01.853533    8370 cli_runner.go:164] Run: docker container inspect addons-742639 --format={{.State.Status}}
	I0831 22:07:01.853699    8370 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-742639"
	I0831 22:07:01.853737    8370 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-742639"
	I0831 22:07:01.853765    8370 host.go:66] Checking if "addons-742639" exists ...
	I0831 22:07:01.854166    8370 cli_runner.go:164] Run: docker container inspect addons-742639 --format={{.State.Status}}
	I0831 22:07:01.856362    8370 addons.go:69] Setting registry=true in profile "addons-742639"
	I0831 22:07:01.856405    8370 addons.go:234] Setting addon registry=true in "addons-742639"
	I0831 22:07:01.856445    8370 host.go:66] Checking if "addons-742639" exists ...
	I0831 22:07:01.856868    8370 cli_runner.go:164] Run: docker container inspect addons-742639 --format={{.State.Status}}
	I0831 22:07:01.858203    8370 addons.go:69] Setting storage-provisioner=true in profile "addons-742639"
	I0831 22:07:01.858246    8370 addons.go:234] Setting addon storage-provisioner=true in "addons-742639"
	I0831 22:07:01.858282    8370 host.go:66] Checking if "addons-742639" exists ...
	I0831 22:07:01.858705    8370 cli_runner.go:164] Run: docker container inspect addons-742639 --format={{.State.Status}}
	I0831 22:07:01.858837    8370 addons.go:69] Setting cloud-spanner=true in profile "addons-742639"
	I0831 22:07:01.858895    8370 addons.go:234] Setting addon cloud-spanner=true in "addons-742639"
	I0831 22:07:01.858937    8370 host.go:66] Checking if "addons-742639" exists ...
	I0831 22:07:01.859629    8370 cli_runner.go:164] Run: docker container inspect addons-742639 --format={{.State.Status}}
	I0831 22:07:01.859107    8370 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-742639"
	I0831 22:07:01.866353    8370 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-742639"
	I0831 22:07:01.866413    8370 host.go:66] Checking if "addons-742639" exists ...
	I0831 22:07:01.866878    8370 cli_runner.go:164] Run: docker container inspect addons-742639 --format={{.State.Status}}
	I0831 22:07:01.861531    8370 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-742639"
	I0831 22:07:01.869812    8370 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-742639"
	I0831 22:07:01.870117    8370 cli_runner.go:164] Run: docker container inspect addons-742639 --format={{.State.Status}}
	I0831 22:07:01.859116    8370 addons.go:69] Setting default-storageclass=true in profile "addons-742639"
	I0831 22:07:01.878694    8370 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-742639"
	I0831 22:07:01.879019    8370 cli_runner.go:164] Run: docker container inspect addons-742639 --format={{.State.Status}}
	I0831 22:07:01.859121    8370 addons.go:69] Setting gcp-auth=true in profile "addons-742639"
	I0831 22:07:01.882768    8370 mustload.go:65] Loading cluster: addons-742639
	I0831 22:07:01.882946    8370 config.go:182] Loaded profile config "addons-742639": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 22:07:01.883228    8370 cli_runner.go:164] Run: docker container inspect addons-742639 --format={{.State.Status}}
	I0831 22:07:01.859125    8370 addons.go:69] Setting ingress=true in profile "addons-742639"
	I0831 22:07:01.887053    8370 addons.go:234] Setting addon ingress=true in "addons-742639"
	I0831 22:07:01.887114    8370 host.go:66] Checking if "addons-742639" exists ...
	I0831 22:07:01.887683    8370 cli_runner.go:164] Run: docker container inspect addons-742639 --format={{.State.Status}}
	I0831 22:07:01.859128    8370 addons.go:69] Setting ingress-dns=true in profile "addons-742639"
	I0831 22:07:01.895184    8370 addons.go:234] Setting addon ingress-dns=true in "addons-742639"
	I0831 22:07:01.895231    8370 host.go:66] Checking if "addons-742639" exists ...
	I0831 22:07:01.895668    8370 cli_runner.go:164] Run: docker container inspect addons-742639 --format={{.State.Status}}
	I0831 22:07:01.859203    8370 out.go:177] * Verifying Kubernetes components...
	I0831 22:07:01.859131    8370 addons.go:69] Setting inspektor-gadget=true in profile "addons-742639"
	I0831 22:07:01.907408    8370 addons.go:234] Setting addon inspektor-gadget=true in "addons-742639"
	I0831 22:07:01.907451    8370 host.go:66] Checking if "addons-742639" exists ...
	I0831 22:07:01.907896    8370 cli_runner.go:164] Run: docker container inspect addons-742639 --format={{.State.Status}}
	I0831 22:07:01.909547    8370 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0831 22:07:01.861542    8370 addons.go:69] Setting volcano=true in profile "addons-742639"
	I0831 22:07:01.909741    8370 addons.go:234] Setting addon volcano=true in "addons-742639"
	I0831 22:07:01.909801    8370 host.go:66] Checking if "addons-742639" exists ...
	I0831 22:07:01.910357    8370 cli_runner.go:164] Run: docker container inspect addons-742639 --format={{.State.Status}}
	I0831 22:07:01.861551    8370 addons.go:69] Setting volumesnapshots=true in profile "addons-742639"
	I0831 22:07:01.918298    8370 addons.go:234] Setting addon volumesnapshots=true in "addons-742639"
	I0831 22:07:01.918340    8370 host.go:66] Checking if "addons-742639" exists ...
	I0831 22:07:01.918771    8370 cli_runner.go:164] Run: docker container inspect addons-742639 --format={{.State.Status}}
	I0831 22:07:01.990962    8370 out.go:177]   - Using image docker.io/registry:2.8.3
	I0831 22:07:02.003402    8370 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0831 22:07:02.004155    8370 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0831 22:07:02.022265    8370 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0831 22:07:02.022718    8370 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0831 22:07:02.022814    8370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-742639
	I0831 22:07:02.022285    8370 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0831 22:07:02.022290    8370 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0831 22:07:02.030155    8370 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-742639"
	I0831 22:07:02.030197    8370 host.go:66] Checking if "addons-742639" exists ...
	I0831 22:07:02.030621    8370 cli_runner.go:164] Run: docker container inspect addons-742639 --format={{.State.Status}}
	I0831 22:07:02.033420    8370 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0831 22:07:02.035211    8370 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0831 22:07:02.035233    8370 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0831 22:07:02.035300    8370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-742639
	I0831 22:07:02.042102    8370 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0831 22:07:02.042127    8370 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0831 22:07:02.042190    8370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-742639
	I0831 22:07:02.058933    8370 host.go:66] Checking if "addons-742639" exists ...
	I0831 22:07:02.060095    8370 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0831 22:07:02.060651    8370 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0831 22:07:02.060667    8370 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0831 22:07:02.060885    8370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-742639
	I0831 22:07:02.066471    8370 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0831 22:07:02.070243    8370 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0831 22:07:02.072869    8370 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0831 22:07:02.074714    8370 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0831 22:07:02.079574    8370 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0831 22:07:02.079599    8370 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0831 22:07:02.079663    8370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-742639
	I0831 22:07:02.091848    8370 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0831 22:07:02.093483    8370 addons.go:234] Setting addon default-storageclass=true in "addons-742639"
	I0831 22:07:02.093519    8370 host.go:66] Checking if "addons-742639" exists ...
	I0831 22:07:02.094011    8370 cli_runner.go:164] Run: docker container inspect addons-742639 --format={{.State.Status}}
	I0831 22:07:02.102838    8370 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0831 22:07:02.102859    8370 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0831 22:07:02.102932    8370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-742639
	I0831 22:07:02.136923    8370 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0831 22:07:02.139125    8370 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0831 22:07:02.142263    8370 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0831 22:07:02.142351    8370 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0831 22:07:02.144151    8370 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0831 22:07:02.144173    8370 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0831 22:07:02.144271    8370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-742639
	I0831 22:07:02.152169    8370 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0831 22:07:02.155313    8370 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0831 22:07:02.158495    8370 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0831 22:07:02.158518    8370 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0831 22:07:02.158585    8370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-742639
	I0831 22:07:02.166202    8370 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0831 22:07:02.167752    8370 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0831 22:07:02.171791    8370 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0831 22:07:02.171939    8370 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0831 22:07:02.171964    8370 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0831 22:07:02.172079    8370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-742639
	I0831 22:07:02.192410    8370 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0831 22:07:02.192438    8370 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0831 22:07:02.192505    8370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-742639
	I0831 22:07:02.214406    8370 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0831 22:07:02.214429    8370 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0831 22:07:02.214489    8370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-742639
	I0831 22:07:02.254158    8370 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0831 22:07:02.255368    8370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/18943-2279/.minikube/machines/addons-742639/id_rsa Username:docker}
	I0831 22:07:02.258354    8370 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0831 22:07:02.259060    8370 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0831 22:07:02.264124    8370 out.go:177]   - Using image docker.io/busybox:stable
	I0831 22:07:02.266115    8370 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0831 22:07:02.266148    8370 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0831 22:07:02.266234    8370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-742639
	I0831 22:07:02.269197    8370 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0831 22:07:02.292121    8370 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0831 22:07:02.292192    8370 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0831 22:07:02.292283    8370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-742639
	I0831 22:07:02.326896    8370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/18943-2279/.minikube/machines/addons-742639/id_rsa Username:docker}
	I0831 22:07:02.361464    8370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/18943-2279/.minikube/machines/addons-742639/id_rsa Username:docker}
	I0831 22:07:02.362436    8370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/18943-2279/.minikube/machines/addons-742639/id_rsa Username:docker}
	I0831 22:07:02.375442    8370 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0831 22:07:02.375472    8370 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0831 22:07:02.375539    8370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-742639
	I0831 22:07:02.377498    8370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/18943-2279/.minikube/machines/addons-742639/id_rsa Username:docker}
	I0831 22:07:02.381438    8370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/18943-2279/.minikube/machines/addons-742639/id_rsa Username:docker}
	I0831 22:07:02.399527    8370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/18943-2279/.minikube/machines/addons-742639/id_rsa Username:docker}
	I0831 22:07:02.400585    8370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/18943-2279/.minikube/machines/addons-742639/id_rsa Username:docker}
	I0831 22:07:02.409656    8370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/18943-2279/.minikube/machines/addons-742639/id_rsa Username:docker}
	I0831 22:07:02.435008    8370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/18943-2279/.minikube/machines/addons-742639/id_rsa Username:docker}
	I0831 22:07:02.442070    8370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/18943-2279/.minikube/machines/addons-742639/id_rsa Username:docker}
	I0831 22:07:02.452656    8370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/18943-2279/.minikube/machines/addons-742639/id_rsa Username:docker}
	I0831 22:07:02.463920    8370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/18943-2279/.minikube/machines/addons-742639/id_rsa Username:docker}
	I0831 22:07:02.471321    8370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/18943-2279/.minikube/machines/addons-742639/id_rsa Username:docker}
	W0831 22:07:02.473791    8370 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0831 22:07:02.473826    8370 retry.go:31] will retry after 165.60993ms: ssh: handshake failed: EOF
	I0831 22:07:02.958163    8370 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.048582991s)
	I0831 22:07:02.958226    8370 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0831 22:07:02.958277    8370 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.106545258s)
	I0831 22:07:02.958411    8370 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0831 22:07:03.229088    8370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0831 22:07:03.235104    8370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0831 22:07:03.295573    8370 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0831 22:07:03.295599    8370 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0831 22:07:03.314613    8370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0831 22:07:03.340986    8370 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0831 22:07:03.341011    8370 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0831 22:07:03.363721    8370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0831 22:07:03.402638    8370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0831 22:07:03.411258    8370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0831 22:07:03.435464    8370 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0831 22:07:03.435488    8370 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0831 22:07:03.449731    8370 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0831 22:07:03.449759    8370 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0831 22:07:03.501714    8370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0831 22:07:03.530239    8370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0831 22:07:03.550453    8370 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0831 22:07:03.550476    8370 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0831 22:07:03.554280    8370 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0831 22:07:03.554300    8370 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0831 22:07:03.558570    8370 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0831 22:07:03.558593    8370 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0831 22:07:03.578540    8370 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0831 22:07:03.578564    8370 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0831 22:07:03.583905    8370 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0831 22:07:03.583930    8370 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0831 22:07:03.717552    8370 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0831 22:07:03.717586    8370 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0831 22:07:03.814989    8370 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0831 22:07:03.815015    8370 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0831 22:07:03.908819    8370 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0831 22:07:03.908850    8370 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0831 22:07:03.942220    8370 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0831 22:07:03.942244    8370 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0831 22:07:03.988580    8370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0831 22:07:03.998544    8370 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0831 22:07:03.998572    8370 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0831 22:07:04.028405    8370 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0831 22:07:04.028433    8370 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0831 22:07:04.176449    8370 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0831 22:07:04.176476    8370 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0831 22:07:04.236821    8370 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0831 22:07:04.236847    8370 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0831 22:07:04.293654    8370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0831 22:07:04.329743    8370 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0831 22:07:04.329814    8370 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0831 22:07:04.353601    8370 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0831 22:07:04.353680    8370 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0831 22:07:04.363428    8370 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0831 22:07:04.363501    8370 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0831 22:07:04.551368    8370 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0831 22:07:04.551454    8370 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0831 22:07:04.593551    8370 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0831 22:07:04.593623    8370 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0831 22:07:04.629748    8370 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0831 22:07:04.629825    8370 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0831 22:07:04.756680    8370 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0831 22:07:04.756751    8370 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0831 22:07:04.808565    8370 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0831 22:07:04.808636    8370 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0831 22:07:05.015293    8370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0831 22:07:05.045134    8370 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.086859923s)
	I0831 22:07:05.045224    8370 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.086797737s)
	I0831 22:07:05.045373    8370 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0831 22:07:05.046852    8370 node_ready.go:35] waiting up to 6m0s for node "addons-742639" to be "Ready" ...
	I0831 22:07:05.051692    8370 node_ready.go:49] node "addons-742639" has status "Ready":"True"
	I0831 22:07:05.051717    8370 node_ready.go:38] duration metric: took 4.791515ms for node "addons-742639" to be "Ready" ...
	I0831 22:07:05.051727    8370 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0831 22:07:05.068864    8370 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-2r28h" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:05.217692    8370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0831 22:07:05.277333    8370 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0831 22:07:05.277358    8370 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0831 22:07:05.550891    8370 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-742639" context rescaled to 1 replicas
	I0831 22:07:05.606099    8370 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0831 22:07:05.606171    8370 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0831 22:07:05.653074    8370 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0831 22:07:05.653148    8370 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0831 22:07:05.834037    8370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0831 22:07:05.973884    8370 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0831 22:07:05.973949    8370 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0831 22:07:06.216157    8370 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0831 22:07:06.216217    8370 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0831 22:07:06.657003    8370 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0831 22:07:06.657076    8370 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0831 22:07:07.053539    8370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0831 22:07:07.093506    8370 pod_ready.go:103] pod "coredns-6f6b679f8f-2r28h" in "kube-system" namespace has status "Ready":"False"
	I0831 22:07:07.917737    8370 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.682594919s)
	I0831 22:07:07.917842    8370 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.603209897s)
	I0831 22:07:07.917913    8370 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.554174717s)
	I0831 22:07:07.918419    8370 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.689306995s)
	I0831 22:07:08.310395    8370 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.907723466s)
	I0831 22:07:08.310486    8370 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.899204354s)
	I0831 22:07:09.074871    8370 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0831 22:07:09.075017    8370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-742639
	I0831 22:07:09.094264    8370 pod_ready.go:103] pod "coredns-6f6b679f8f-2r28h" in "kube-system" namespace has status "Ready":"False"
	I0831 22:07:09.100764    8370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/18943-2279/.minikube/machines/addons-742639/id_rsa Username:docker}
	I0831 22:07:09.995439    8370 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0831 22:07:10.479982    8370 addons.go:234] Setting addon gcp-auth=true in "addons-742639"
	I0831 22:07:10.480049    8370 host.go:66] Checking if "addons-742639" exists ...
	I0831 22:07:10.480571    8370 cli_runner.go:164] Run: docker container inspect addons-742639 --format={{.State.Status}}
	I0831 22:07:10.507349    8370 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0831 22:07:10.507413    8370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-742639
	I0831 22:07:10.533622    8370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/18943-2279/.minikube/machines/addons-742639/id_rsa Username:docker}
	I0831 22:07:11.107672    8370 pod_ready.go:103] pod "coredns-6f6b679f8f-2r28h" in "kube-system" namespace has status "Ready":"False"
	I0831 22:07:13.632496    8370 pod_ready.go:103] pod "coredns-6f6b679f8f-2r28h" in "kube-system" namespace has status "Ready":"False"
	I0831 22:07:14.746999    8370 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (11.245248076s)
	I0831 22:07:14.747158    8370 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (11.216881668s)
	I0831 22:07:14.747175    8370 addons.go:475] Verifying addon ingress=true in "addons-742639"
	I0831 22:07:14.747314    8370 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (10.758705839s)
	I0831 22:07:14.747332    8370 addons.go:475] Verifying addon registry=true in "addons-742639"
	I0831 22:07:14.747661    8370 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.45392197s)
	I0831 22:07:14.747682    8370 addons.go:475] Verifying addon metrics-server=true in "addons-742639"
	I0831 22:07:14.747765    8370 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (9.732397673s)
	W0831 22:07:14.747790    8370 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0831 22:07:14.747805    8370 retry.go:31] will retry after 293.497062ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0831 22:07:14.747850    8370 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (9.530081246s)
	I0831 22:07:14.747967    8370 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (8.913855087s)
	I0831 22:07:14.749234    8370 out.go:177] * Verifying ingress addon...
	I0831 22:07:14.750886    8370 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-742639 service yakd-dashboard -n yakd-dashboard
	
	I0831 22:07:14.750930    8370 out.go:177] * Verifying registry addon...
	I0831 22:07:14.751895    8370 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0831 22:07:14.753812    8370 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0831 22:07:14.807419    8370 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0831 22:07:14.807441    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:14.807868    8370 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0831 22:07:14.807888    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:15.041951    8370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0831 22:07:15.281902    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:15.283306    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:15.632448    8370 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (8.57881037s)
	I0831 22:07:15.632496    8370 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-742639"
	I0831 22:07:15.632693    8370 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (5.125315469s)
	I0831 22:07:15.635605    8370 out.go:177] * Verifying csi-hostpath-driver addon...
	I0831 22:07:15.635671    8370 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0831 22:07:15.640108    8370 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0831 22:07:15.642911    8370 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0831 22:07:15.645508    8370 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0831 22:07:15.645541    8370 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0831 22:07:15.650879    8370 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0831 22:07:15.650963    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:15.757704    8370 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0831 22:07:15.757785    8370 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0831 22:07:15.785442    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:15.786377    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:15.801864    8370 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0831 22:07:15.801888    8370 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0831 22:07:15.850746    8370 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0831 22:07:16.077549    8370 pod_ready.go:103] pod "coredns-6f6b679f8f-2r28h" in "kube-system" namespace has status "Ready":"False"
	I0831 22:07:16.145784    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:16.263692    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:16.265097    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:16.645505    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:16.756244    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:16.758355    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:17.145384    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:17.261566    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:17.262694    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:17.417192    8370 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.375199565s)
	I0831 22:07:17.417266    8370 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.566496804s)
	I0831 22:07:17.420692    8370 addons.go:475] Verifying addon gcp-auth=true in "addons-742639"
	I0831 22:07:17.426507    8370 out.go:177] * Verifying gcp-auth addon...
	I0831 22:07:17.430018    8370 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0831 22:07:17.432650    8370 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0831 22:07:17.644794    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:17.756506    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:17.759198    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:18.145455    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:18.261067    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:18.263310    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:18.575519    8370 pod_ready.go:103] pod "coredns-6f6b679f8f-2r28h" in "kube-system" namespace has status "Ready":"False"
	I0831 22:07:18.644959    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:18.757027    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:18.758623    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:19.148241    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:19.259692    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:19.260943    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:19.645818    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:19.756771    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:19.757705    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:20.147578    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:20.267396    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:20.271423    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:20.577758    8370 pod_ready.go:103] pod "coredns-6f6b679f8f-2r28h" in "kube-system" namespace has status "Ready":"False"
	I0831 22:07:20.645650    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:20.757887    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:20.762050    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:21.145645    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:21.256403    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:21.260789    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:21.647929    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:21.757285    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:21.758147    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:22.146436    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:22.260575    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:22.262284    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:22.644691    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:22.757892    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:22.760055    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:23.075689    8370 pod_ready.go:103] pod "coredns-6f6b679f8f-2r28h" in "kube-system" namespace has status "Ready":"False"
	I0831 22:07:23.145817    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:23.268835    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:23.270850    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:23.645913    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:23.756795    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:23.758665    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:24.146085    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:24.262662    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:24.263266    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:24.645339    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:24.758283    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:24.759460    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:25.076337    8370 pod_ready.go:103] pod "coredns-6f6b679f8f-2r28h" in "kube-system" namespace has status "Ready":"False"
	I0831 22:07:25.144976    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:25.257507    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:25.259188    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:25.645995    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:25.757671    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:25.759855    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:26.145051    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:26.265317    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:26.266472    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:26.645395    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:26.758534    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:26.758727    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:27.146670    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:27.263818    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:27.265553    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:27.575476    8370 pod_ready.go:103] pod "coredns-6f6b679f8f-2r28h" in "kube-system" namespace has status "Ready":"False"
	I0831 22:07:27.645394    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:27.756461    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:27.758090    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:28.144989    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:28.260437    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:28.262885    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:28.646164    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:28.758417    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:28.759345    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:29.147514    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:29.262351    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:29.263802    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:29.575916    8370 pod_ready.go:103] pod "coredns-6f6b679f8f-2r28h" in "kube-system" namespace has status "Ready":"False"
	I0831 22:07:29.646456    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:29.756788    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:29.759627    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:30.154992    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:30.262403    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:30.263391    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:30.645570    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:30.758774    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:30.759954    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:31.145287    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:31.263179    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:31.265449    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:31.646869    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:31.756558    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:31.759993    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:32.075598    8370 pod_ready.go:103] pod "coredns-6f6b679f8f-2r28h" in "kube-system" namespace has status "Ready":"False"
	I0831 22:07:32.145670    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:32.263424    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:32.265637    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:32.645704    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:32.756283    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:32.759983    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:33.145504    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:33.261317    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:33.261538    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:33.644983    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:33.758902    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:33.759526    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:34.076114    8370 pod_ready.go:103] pod "coredns-6f6b679f8f-2r28h" in "kube-system" namespace has status "Ready":"False"
	I0831 22:07:34.145942    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:34.262005    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:34.263947    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:34.645362    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:34.759001    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:34.765022    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:35.145989    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:35.264265    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:35.265548    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:35.645406    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:35.758567    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:35.759479    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:36.146102    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:36.256834    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:36.258995    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:36.575487    8370 pod_ready.go:103] pod "coredns-6f6b679f8f-2r28h" in "kube-system" namespace has status "Ready":"False"
	I0831 22:07:36.644829    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:36.757150    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:36.758560    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:37.144844    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:37.261403    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:37.262870    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:37.646494    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:37.756908    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:37.758662    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:38.145021    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:38.268255    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0831 22:07:38.269182    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:38.576174    8370 pod_ready.go:103] pod "coredns-6f6b679f8f-2r28h" in "kube-system" namespace has status "Ready":"False"
	I0831 22:07:38.645934    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:38.758294    8370 kapi.go:107] duration metric: took 24.004480064s to wait for kubernetes.io/minikube-addons=registry ...
	I0831 22:07:38.759651    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:39.148617    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:39.263125    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:39.644796    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:39.756471    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:40.145521    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:40.262342    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:40.645810    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:40.756898    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:41.076493    8370 pod_ready.go:103] pod "coredns-6f6b679f8f-2r28h" in "kube-system" namespace has status "Ready":"False"
	I0831 22:07:41.147045    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:41.258771    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:41.652688    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:41.759494    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:42.083835    8370 pod_ready.go:93] pod "coredns-6f6b679f8f-2r28h" in "kube-system" namespace has status "Ready":"True"
	I0831 22:07:42.083923    8370 pod_ready.go:82] duration metric: took 37.014974353s for pod "coredns-6f6b679f8f-2r28h" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:42.083962    8370 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-lkz6h" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:42.086529    8370 pod_ready.go:98] error getting pod "coredns-6f6b679f8f-lkz6h" in "kube-system" namespace (skipping!): pods "coredns-6f6b679f8f-lkz6h" not found
	I0831 22:07:42.086601    8370 pod_ready.go:82] duration metric: took 2.590541ms for pod "coredns-6f6b679f8f-lkz6h" in "kube-system" namespace to be "Ready" ...
	E0831 22:07:42.086628    8370 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-6f6b679f8f-lkz6h" in "kube-system" namespace (skipping!): pods "coredns-6f6b679f8f-lkz6h" not found
	I0831 22:07:42.086650    8370 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-742639" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:42.106482    8370 pod_ready.go:93] pod "etcd-addons-742639" in "kube-system" namespace has status "Ready":"True"
	I0831 22:07:42.106570    8370 pod_ready.go:82] duration metric: took 19.878465ms for pod "etcd-addons-742639" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:42.106598    8370 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-742639" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:42.119041    8370 pod_ready.go:93] pod "kube-apiserver-addons-742639" in "kube-system" namespace has status "Ready":"True"
	I0831 22:07:42.119233    8370 pod_ready.go:82] duration metric: took 12.596344ms for pod "kube-apiserver-addons-742639" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:42.119267    8370 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-742639" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:42.133512    8370 pod_ready.go:93] pod "kube-controller-manager-addons-742639" in "kube-system" namespace has status "Ready":"True"
	I0831 22:07:42.133603    8370 pod_ready.go:82] duration metric: took 14.312454ms for pod "kube-controller-manager-addons-742639" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:42.133670    8370 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-crbmm" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:42.147815    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:42.266838    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:42.278057    8370 pod_ready.go:93] pod "kube-proxy-crbmm" in "kube-system" namespace has status "Ready":"True"
	I0831 22:07:42.278089    8370 pod_ready.go:82] duration metric: took 144.389156ms for pod "kube-proxy-crbmm" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:42.278101    8370 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-742639" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:42.647129    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:42.673026    8370 pod_ready.go:93] pod "kube-scheduler-addons-742639" in "kube-system" namespace has status "Ready":"True"
	I0831 22:07:42.673052    8370 pod_ready.go:82] duration metric: took 394.943069ms for pod "kube-scheduler-addons-742639" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:42.673063    8370 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-gclmc" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:42.758057    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:43.073063    8370 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-gclmc" in "kube-system" namespace has status "Ready":"True"
	I0831 22:07:43.073087    8370 pod_ready.go:82] duration metric: took 400.016313ms for pod "nvidia-device-plugin-daemonset-gclmc" in "kube-system" namespace to be "Ready" ...
	I0831 22:07:43.073097    8370 pod_ready.go:39] duration metric: took 38.021357567s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0831 22:07:43.073116    8370 api_server.go:52] waiting for apiserver process to appear ...
	I0831 22:07:43.073179    8370 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 22:07:43.091361    8370 api_server.go:72] duration metric: took 41.239591461s to wait for apiserver process to appear ...
	I0831 22:07:43.091405    8370 api_server.go:88] waiting for apiserver healthz status ...
	I0831 22:07:43.091429    8370 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0831 22:07:43.100329    8370 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0831 22:07:43.101398    8370 api_server.go:141] control plane version: v1.31.0
	I0831 22:07:43.101419    8370 api_server.go:131] duration metric: took 10.006313ms to wait for apiserver health ...
	I0831 22:07:43.101428    8370 system_pods.go:43] waiting for kube-system pods to appear ...
	I0831 22:07:43.144847    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:43.261446    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:43.279893    8370 system_pods.go:59] 17 kube-system pods found
	I0831 22:07:43.279930    8370 system_pods.go:61] "coredns-6f6b679f8f-2r28h" [588149a7-2251-4e0a-a958-ffe0e095088b] Running
	I0831 22:07:43.279939    8370 system_pods.go:61] "csi-hostpath-attacher-0" [698bbcb4-8ce9-4da2-8a49-15e396d9f92b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0831 22:07:43.279947    8370 system_pods.go:61] "csi-hostpath-resizer-0" [a6924b64-9b50-4c59-8d42-fe6dd2db1b23] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0831 22:07:43.279968    8370 system_pods.go:61] "csi-hostpathplugin-nrkgb" [c7a8346a-9322-4c6f-be30-0e45911a4e72] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0831 22:07:43.279974    8370 system_pods.go:61] "etcd-addons-742639" [6cdba371-1b59-4da0-aa9f-9071f4966541] Running
	I0831 22:07:43.279979    8370 system_pods.go:61] "kube-apiserver-addons-742639" [9dbac6d8-3bbc-40d4-8455-1f9611612369] Running
	I0831 22:07:43.279983    8370 system_pods.go:61] "kube-controller-manager-addons-742639" [9c17351a-2106-4f37-8202-3270efdde2f9] Running
	I0831 22:07:43.279987    8370 system_pods.go:61] "kube-ingress-dns-minikube" [92fa896f-5bb8-4cc6-98fb-6595f2c74d0b] Running
	I0831 22:07:43.279991    8370 system_pods.go:61] "kube-proxy-crbmm" [62c502a7-5e61-4598-a88c-7cf21f445aa8] Running
	I0831 22:07:43.279995    8370 system_pods.go:61] "kube-scheduler-addons-742639" [9b01b250-ec67-4502-b264-4c3cb6aed01e] Running
	I0831 22:07:43.280001    8370 system_pods.go:61] "metrics-server-84c5f94fbc-v5wlx" [6c6e14f0-a408-480e-9c81-b11d7f1f96f0] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0831 22:07:43.280007    8370 system_pods.go:61] "nvidia-device-plugin-daemonset-gclmc" [5cbce86e-0fc2-4aed-80c5-66b21a417eb6] Running
	I0831 22:07:43.280012    8370 system_pods.go:61] "registry-6fb4cdfc84-25jhq" [195b1392-2aad-40ff-a44b-0641056727a1] Running
	I0831 22:07:43.280016    8370 system_pods.go:61] "registry-proxy-jxqpd" [4752523a-ac3b-4bb6-8199-9fb816d49c87] Running
	I0831 22:07:43.280023    8370 system_pods.go:61] "snapshot-controller-56fcc65765-hsfrc" [8dde4145-05a9-4b0b-894d-f5b5eaf34be3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0831 22:07:43.280030    8370 system_pods.go:61] "snapshot-controller-56fcc65765-vdwxh" [a56a2f44-4608-49c1-8ce6-fbd0cd6b6d0d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0831 22:07:43.280039    8370 system_pods.go:61] "storage-provisioner" [c57f7ec3-221f-4a79-a0d9-b8d7235eb1df] Running
	I0831 22:07:43.280045    8370 system_pods.go:74] duration metric: took 178.612264ms to wait for pod list to return data ...
	I0831 22:07:43.280058    8370 default_sa.go:34] waiting for default service account to be created ...
	I0831 22:07:43.489862    8370 default_sa.go:45] found service account: "default"
	I0831 22:07:43.489888    8370 default_sa.go:55] duration metric: took 209.823241ms for default service account to be created ...
	I0831 22:07:43.489898    8370 system_pods.go:116] waiting for k8s-apps to be running ...
	I0831 22:07:43.647392    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:43.751059    8370 system_pods.go:86] 17 kube-system pods found
	I0831 22:07:43.755562    8370 system_pods.go:89] "coredns-6f6b679f8f-2r28h" [588149a7-2251-4e0a-a958-ffe0e095088b] Running
	I0831 22:07:43.755594    8370 system_pods.go:89] "csi-hostpath-attacher-0" [698bbcb4-8ce9-4da2-8a49-15e396d9f92b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0831 22:07:43.755633    8370 system_pods.go:89] "csi-hostpath-resizer-0" [a6924b64-9b50-4c59-8d42-fe6dd2db1b23] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0831 22:07:43.755662    8370 system_pods.go:89] "csi-hostpathplugin-nrkgb" [c7a8346a-9322-4c6f-be30-0e45911a4e72] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0831 22:07:43.755682    8370 system_pods.go:89] "etcd-addons-742639" [6cdba371-1b59-4da0-aa9f-9071f4966541] Running
	I0831 22:07:43.755703    8370 system_pods.go:89] "kube-apiserver-addons-742639" [9dbac6d8-3bbc-40d4-8455-1f9611612369] Running
	I0831 22:07:43.755735    8370 system_pods.go:89] "kube-controller-manager-addons-742639" [9c17351a-2106-4f37-8202-3270efdde2f9] Running
	I0831 22:07:43.755758    8370 system_pods.go:89] "kube-ingress-dns-minikube" [92fa896f-5bb8-4cc6-98fb-6595f2c74d0b] Running
	I0831 22:07:43.755778    8370 system_pods.go:89] "kube-proxy-crbmm" [62c502a7-5e61-4598-a88c-7cf21f445aa8] Running
	I0831 22:07:43.755798    8370 system_pods.go:89] "kube-scheduler-addons-742639" [9b01b250-ec67-4502-b264-4c3cb6aed01e] Running
	I0831 22:07:43.755817    8370 system_pods.go:89] "metrics-server-84c5f94fbc-v5wlx" [6c6e14f0-a408-480e-9c81-b11d7f1f96f0] Running
	I0831 22:07:43.755845    8370 system_pods.go:89] "nvidia-device-plugin-daemonset-gclmc" [5cbce86e-0fc2-4aed-80c5-66b21a417eb6] Running
	I0831 22:07:43.755868    8370 system_pods.go:89] "registry-6fb4cdfc84-25jhq" [195b1392-2aad-40ff-a44b-0641056727a1] Running
	I0831 22:07:43.755886    8370 system_pods.go:89] "registry-proxy-jxqpd" [4752523a-ac3b-4bb6-8199-9fb816d49c87] Running
	I0831 22:07:43.755909    8370 system_pods.go:89] "snapshot-controller-56fcc65765-hsfrc" [8dde4145-05a9-4b0b-894d-f5b5eaf34be3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0831 22:07:43.755945    8370 system_pods.go:89] "snapshot-controller-56fcc65765-vdwxh" [a56a2f44-4608-49c1-8ce6-fbd0cd6b6d0d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0831 22:07:43.755966    8370 system_pods.go:89] "storage-provisioner" [c57f7ec3-221f-4a79-a0d9-b8d7235eb1df] Running
	I0831 22:07:43.755988    8370 system_pods.go:126] duration metric: took 266.082884ms to wait for k8s-apps to be running ...
	I0831 22:07:43.756007    8370 system_svc.go:44] waiting for kubelet service to be running ....
	I0831 22:07:43.756088    8370 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0831 22:07:43.756424    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:43.773152    8370 system_svc.go:56] duration metric: took 17.135155ms WaitForService to wait for kubelet
	I0831 22:07:43.773225    8370 kubeadm.go:582] duration metric: took 41.921459313s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0831 22:07:43.773258    8370 node_conditions.go:102] verifying NodePressure condition ...
	I0831 22:07:43.872875    8370 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0831 22:07:43.872955    8370 node_conditions.go:123] node cpu capacity is 2
	I0831 22:07:43.872980    8370 node_conditions.go:105] duration metric: took 99.703737ms to run NodePressure ...
	I0831 22:07:43.873006    8370 start.go:241] waiting for startup goroutines ...
	I0831 22:07:44.145138    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:44.263428    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:44.645873    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:44.756970    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:45.153171    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:45.261760    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:45.646056    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:45.756646    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:46.145380    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:46.262269    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:46.645559    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:46.756890    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:47.145712    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:47.265570    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:47.647476    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:47.757582    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:48.146551    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:48.269733    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:48.645612    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:48.757665    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:49.155000    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:49.261745    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:49.651697    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:49.757702    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:50.146063    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:50.262750    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:50.645386    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:50.757145    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:51.146450    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:51.257382    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:51.645944    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:51.766685    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:52.145820    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:52.260998    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:52.644975    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:52.756635    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:53.147057    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:53.256398    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:53.644968    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:53.756089    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:54.147269    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:54.274886    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:54.645780    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:54.757329    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:55.146030    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:55.256972    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:55.663112    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:55.757154    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:56.148476    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:56.283501    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:56.644619    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:56.756798    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:57.146278    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:57.262362    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:57.645124    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:57.757023    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:58.145524    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:58.265938    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:58.645186    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:58.757201    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:59.146680    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:59.262522    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:07:59.644910    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:07:59.756632    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:00.209483    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:00.305779    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:00.654989    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:00.756513    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:01.145937    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:01.258240    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:01.645047    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:01.756158    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:02.144396    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:02.268717    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:02.645840    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:02.756522    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:03.145073    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:03.261972    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:03.667521    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:03.761647    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:04.145216    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:04.261587    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:04.647018    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:04.758043    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:05.145196    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:05.261679    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:05.645618    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:05.756692    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:06.145102    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:06.260009    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:06.645972    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:06.756550    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:07.145189    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:07.259561    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:07.644751    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:07.756793    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:08.145532    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:08.257775    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:08.646456    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:08.755690    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:09.148901    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:09.263055    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:09.645161    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:09.756656    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:10.145872    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:10.255925    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:10.645089    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:10.755784    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:11.156987    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:11.258661    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:11.645602    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0831 22:08:11.757334    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:12.145718    8370 kapi.go:107] duration metric: took 56.505608429s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0831 22:08:12.256954    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:12.756687    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:13.256605    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:13.757402    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:14.262565    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:14.756038    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:15.259372    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:15.756241    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:16.263199    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:16.756320    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:17.257880    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:17.757185    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:18.256965    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:18.761666    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:19.262493    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:19.756200    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:20.257371    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:20.758229    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:21.261610    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:21.758457    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:22.264608    8370 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0831 22:08:22.774877    8370 kapi.go:107] duration metric: took 1m8.02297811s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0831 22:08:40.436992    8370 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0831 22:08:40.437017    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:40.933884    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:41.433317    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:41.934369    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:42.433392    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:42.934454    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:43.433169    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:43.934632    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:44.433628    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:44.934455    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:45.434706    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:45.933764    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:46.433748    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:46.934126    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:47.433611    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:47.933621    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:48.433509    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:48.933460    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:49.434033    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:49.934220    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:50.434542    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:50.933505    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:51.434285    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:51.935835    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:52.434676    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:52.933357    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:53.433295    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:53.934023    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:54.433563    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:54.933849    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:55.433377    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:55.933318    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:56.433826    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:56.933495    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:57.433286    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:57.933841    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:58.433751    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:58.935061    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:59.434052    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:08:59.933330    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:00.441810    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:00.933883    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:01.433797    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:01.933612    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:02.433602    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:02.933593    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:03.433189    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:03.933932    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:04.434574    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:04.933750    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:05.433454    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:05.934110    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:06.433274    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:06.933740    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:07.433230    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:07.933631    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:08.433218    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:08.933442    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:09.434663    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:09.934091    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:10.432913    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:10.933602    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:11.432938    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:11.934431    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:12.433523    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:12.934206    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:13.433960    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:13.933469    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:14.433998    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:14.933587    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:15.433427    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:15.934107    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:16.433763    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:16.935822    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:17.434384    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:17.933668    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:18.433268    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:18.933547    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:19.433794    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:19.933989    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:20.433023    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:20.934273    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:21.433446    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:21.933492    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:22.433089    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:22.933460    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:23.434851    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:23.934857    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:24.433742    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:24.933574    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:25.433251    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:25.934093    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:26.433791    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:26.934514    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:27.433353    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:27.934076    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:28.434242    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:28.934204    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:29.434437    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:29.933035    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:30.433851    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:30.934079    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:31.434099    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:31.944283    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:32.433768    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:32.934475    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:33.433148    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:33.934040    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:34.434065    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:34.934468    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:35.433195    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:35.934358    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:36.433494    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:36.934025    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:37.433577    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:37.933660    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:38.434126    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:38.934135    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:39.434070    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:39.933643    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:40.433886    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:40.937085    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:41.436146    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:41.934238    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:42.433919    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:42.933700    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:43.433675    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:43.933190    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:44.434489    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:44.933390    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:45.435843    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:45.933434    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:46.433391    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:46.934586    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:47.434452    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:47.933519    8370 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0831 22:09:48.433761    8370 kapi.go:107] duration metric: took 2m31.003741312s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0831 22:09:48.435420    8370 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-742639 cluster.
	I0831 22:09:48.436976    8370 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0831 22:09:48.438663    8370 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0831 22:09:48.440464    8370 out.go:177] * Enabled addons: cloud-spanner, nvidia-device-plugin, ingress-dns, storage-provisioner-rancher, storage-provisioner, default-storageclass, volcano, metrics-server, inspektor-gadget, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0831 22:09:48.443888    8370 addons.go:510] duration metric: took 2m46.59184104s for enable addons: enabled=[cloud-spanner nvidia-device-plugin ingress-dns storage-provisioner-rancher storage-provisioner default-storageclass volcano metrics-server inspektor-gadget yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0831 22:09:48.443944    8370 start.go:246] waiting for cluster config update ...
	I0831 22:09:48.443965    8370 start.go:255] writing updated cluster config ...
	I0831 22:09:48.444246    8370 ssh_runner.go:195] Run: rm -f paused
	I0831 22:09:48.773230    8370 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0831 22:09:48.775772    8370 out.go:177] * Done! kubectl is now configured to use "addons-742639" cluster and "default" namespace by default
	
	
	==> Docker <==
	Aug 31 22:19:29 addons-742639 dockerd[1282]: time="2024-08-31T22:19:29.517966940Z" level=info msg="ignoring event" container=7c341d28cb318958e3027d8a3887e35ab746d6ddc3a9852af50d5d6b0e6a4174 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 31 22:19:29 addons-742639 dockerd[1282]: time="2024-08-31T22:19:29.535613157Z" level=info msg="ignoring event" container=9308bcf2bbf31d9430e246e12367d7264a0d16c806bd96f1d56d3fd6194bab2e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 31 22:19:29 addons-742639 dockerd[1282]: time="2024-08-31T22:19:29.535678961Z" level=info msg="ignoring event" container=36e4c24fb274b8bb7295c175d71f4b2466a304f517812e6a6a7b2dd5474f84cd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 31 22:19:29 addons-742639 dockerd[1282]: time="2024-08-31T22:19:29.549616005Z" level=info msg="ignoring event" container=2732a4eb407ae66f791550b3dbce5b919c8c5877aea8288b58cac87b77363bbf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 31 22:19:29 addons-742639 dockerd[1282]: time="2024-08-31T22:19:29.615719476Z" level=info msg="ignoring event" container=e4d458d8d3ddb94beb2fa1c8c17435e554c3e987c72079c8ac119d1d357052fa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 31 22:19:29 addons-742639 dockerd[1282]: time="2024-08-31T22:19:29.625485100Z" level=info msg="ignoring event" container=58ce98f42048d59731bf6053cd0680b2e77533e27b8d1d311b56f22c09c8a0ad module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 31 22:19:29 addons-742639 dockerd[1282]: time="2024-08-31T22:19:29.755923908Z" level=info msg="ignoring event" container=0c800b7a5ae5218c9ee9b2e693247444775514a7e8d7b2bcc154e59349aa420c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 31 22:19:29 addons-742639 dockerd[1282]: time="2024-08-31T22:19:29.879458538Z" level=info msg="ignoring event" container=4c758f12cc164b11e3e613f68f19df1a3f505c12d8e99e478b070dbfcb9f1dc7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 31 22:19:29 addons-742639 dockerd[1282]: time="2024-08-31T22:19:29.917931126Z" level=info msg="ignoring event" container=0c144dadbd85490527bd0ba1344fbea55187a8ef6fc22ac4d9cd0104c4b7338b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 31 22:19:36 addons-742639 dockerd[1282]: time="2024-08-31T22:19:36.136959706Z" level=info msg="ignoring event" container=2d6ea90e71a82312a0cd1b5d99ee1b0bcceb1983b3b9b01d84aeca869b30ef52 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 31 22:19:36 addons-742639 dockerd[1282]: time="2024-08-31T22:19:36.162241047Z" level=info msg="ignoring event" container=d46667203dc6551b00b6318bea0e64cedab768d8c9e7769e06f096e557daa07a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 31 22:19:36 addons-742639 dockerd[1282]: time="2024-08-31T22:19:36.315704798Z" level=info msg="ignoring event" container=0af4e48af1e9175344a68a9c5d0374e976df6e5961e1fe55ca6da038f7ab31e2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 31 22:19:36 addons-742639 dockerd[1282]: time="2024-08-31T22:19:36.362299651Z" level=info msg="ignoring event" container=c1a8bc514d0e93071a2beb53ebae39a70c9d94297c9e4ffae19b25b098803655 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 31 22:19:42 addons-742639 dockerd[1282]: time="2024-08-31T22:19:42.727958473Z" level=info msg="ignoring event" container=8d0be9a950d68e78dad08b0f746b166d236e1fb8790527361e918c3e3ba440dd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 31 22:19:42 addons-742639 dockerd[1282]: time="2024-08-31T22:19:42.917425134Z" level=info msg="ignoring event" container=8ab1e7cc6e598d497aa861718de5d93e32f37801991df073a720ba6e0e6c9927 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 31 22:19:43 addons-742639 cri-dockerd[1540]: time="2024-08-31T22:19:43Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fd70de4fc7f398447bf4cdb392c0cb13c8c4f59130a936214d1b9ec8e5a05ab7/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Aug 31 22:19:43 addons-742639 dockerd[1282]: time="2024-08-31T22:19:43.855526966Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Aug 31 22:19:44 addons-742639 dockerd[1282]: time="2024-08-31T22:19:44.078722111Z" level=info msg="ignoring event" container=f651d67d553b8e08c5b3c54c6d51b2e0c62bd344308e8ff70fc74f8f22f9a3d2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 31 22:19:44 addons-742639 cri-dockerd[1540]: time="2024-08-31T22:19:44Z" level=info msg="Stop pulling image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: Status: Downloaded newer image for busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Aug 31 22:19:44 addons-742639 dockerd[1282]: time="2024-08-31T22:19:44.710049297Z" level=info msg="ignoring event" container=cecb65aaf17736ce5c98fed10737d7591a5381b053b9f06591bc1b68af765300 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 31 22:19:44 addons-742639 dockerd[1282]: time="2024-08-31T22:19:44.976159300Z" level=info msg="ignoring event" container=86c614f3b486b03046228249628ecc03d7ce3d2ad3957f307849549a4e200369 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 31 22:19:45 addons-742639 dockerd[1282]: time="2024-08-31T22:19:45.065501694Z" level=info msg="ignoring event" container=101da61670e9495b1090b53df81a8e9968a0de943e20687386b88f8595c745c3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 31 22:19:45 addons-742639 dockerd[1282]: time="2024-08-31T22:19:45.332399905Z" level=info msg="ignoring event" container=d2f4a9b73c7b570bd71bcf2701cf45fa64b08968dbfdfbd525c87ec4106069e1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 31 22:19:45 addons-742639 dockerd[1282]: time="2024-08-31T22:19:45.448204778Z" level=info msg="ignoring event" container=cc856f4dd8e52b87a6d820a23db7d9448540c3fe0880696357c2bb83be768552 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 31 22:19:46 addons-742639 dockerd[1282]: time="2024-08-31T22:19:46.014500348Z" level=info msg="ignoring event" container=fd70de4fc7f398447bf4cdb392c0cb13c8c4f59130a936214d1b9ec8e5a05ab7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	cecb65aaf1773       busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79                                              2 seconds ago       Exited              helper-pod                0                   fd70de4fc7f39       helper-pod-create-pvc-77e56be0-b431-4714-8471-5267667c9b66
	e84fa6017a3e8       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc            38 seconds ago      Exited              gadget                    7                   3ea6beb794b0f       gadget-97mp2
	d6d1b8bd570b2       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 9 minutes ago       Running             gcp-auth                  0                   e54d308eedc8f       gcp-auth-89d5ffd79-6mzss
	30df4cea2a57d       registry.k8s.io/ingress-nginx/controller@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce             11 minutes ago      Running             controller                0                   e513607255f32       ingress-nginx-controller-bc57996ff-7brrr
	5823cda0cbf31       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              patch                     0                   394ca6d49dd5d       ingress-nginx-admission-patch-42nm7
	549eaa46f6b07       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              create                    0                   24e72630b317f       ingress-nginx-admission-create-rs7wx
	b39dbb8a2645c       registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9        12 minutes ago      Running             metrics-server            0                   0d85706787fb0       metrics-server-84c5f94fbc-v5wlx
	4b9b3cc9495d1       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                       12 minutes ago      Running             local-path-provisioner    0                   1f113edac2ecc       local-path-provisioner-86d989889c-l28kq
	d4c7703dbfff6       gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc               12 minutes ago      Running             cloud-spanner-emulator    0                   d1e5de0f770f8       cloud-spanner-emulator-769b77f747-95jzp
	161fb05ff0548       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c             12 minutes ago      Running             minikube-ingress-dns      0                   de8f884cd5ff5       kube-ingress-dns-minikube
	08d38a5044400       ba04bb24b9575                                                                                                                12 minutes ago      Running             storage-provisioner       0                   5c9199c4513a1       storage-provisioner
	3b4bf205969c1       2437cf7621777                                                                                                                12 minutes ago      Running             coredns                   0                   1ccdb858a1ec4       coredns-6f6b679f8f-2r28h
	0e656117fcb20       71d55d66fd4ee                                                                                                                12 minutes ago      Running             kube-proxy                0                   7b434c24d9fa0       kube-proxy-crbmm
	11806c7fbbee3       fbbbd428abb4d                                                                                                                12 minutes ago      Running             kube-scheduler            0                   cf43a103b9394       kube-scheduler-addons-742639
	45afdf3ed5418       27e3830e14027                                                                                                                12 minutes ago      Running             etcd                      0                   ca4f638920df6       etcd-addons-742639
	51a1ab674a7f5       fcb0683e6bdbd                                                                                                                12 minutes ago      Running             kube-controller-manager   0                   4778d13a3e59a       kube-controller-manager-addons-742639
	42b33a489ab3a       cd0f0ae0ec9e0                                                                                                                12 minutes ago      Running             kube-apiserver            0                   a9b635f65bd33       kube-apiserver-addons-742639
	
	
	==> controller_ingress [30df4cea2a57] <==
	W0831 22:08:21.805368       7 client_config.go:659] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
	I0831 22:08:21.805501       7 main.go:205] "Creating API client" host="https://10.96.0.1:443"
	I0831 22:08:21.815624       7 main.go:248] "Running in Kubernetes cluster" major="1" minor="31" git="v1.31.0" state="clean" commit="9edcffcde5595e8a5b1a35f88c421764e575afce" platform="linux/arm64"
	I0831 22:08:22.806875       7 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I0831 22:08:22.822933       7 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I0831 22:08:22.832494       7 nginx.go:271] "Starting NGINX Ingress controller"
	I0831 22:08:22.848890       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"7208dd6f-5dae-49a5-8359-75f7dd29a366", APIVersion:"v1", ResourceVersion:"701", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0831 22:08:22.849890       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"ffa31d01-aa28-48b2-81f2-3febcceff7a4", APIVersion:"v1", ResourceVersion:"702", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0831 22:08:22.850089       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"44595eb7-ccee-408a-ad80-23b68017b3c9", APIVersion:"v1", ResourceVersion:"703", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0831 22:08:24.034182       7 nginx.go:317] "Starting NGINX process"
	I0831 22:08:24.034741       7 nginx.go:337] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0831 22:08:24.034961       7 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0831 22:08:24.036635       7 controller.go:193] "Configuration changes detected, backend reload required"
	I0831 22:08:24.058439       7 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0831 22:08:24.058674       7 status.go:85] "New leader elected" identity="ingress-nginx-controller-bc57996ff-7brrr"
	I0831 22:08:24.067703       7 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-bc57996ff-7brrr" node="addons-742639"
	I0831 22:08:24.079551       7 controller.go:213] "Backend successfully reloaded"
	I0831 22:08:24.079631       7 controller.go:224] "Initial sync, sleeping for 1 second"
	I0831 22:08:24.079742       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-7brrr", UID:"9ba515e0-a32a-4f94-8340-05dac0bc7d49", APIVersion:"v1", ResourceVersion:"1278", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	  Build:         46e76e5916813cfca2a9b0bfdc34b69a0000f6b9
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.25.5
	
	-------------------------------------------------------------------------------
	
	
	
	==> coredns [3b4bf205969c] <==
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	[INFO] Reloading complete
	[INFO] 127.0.0.1:42663 - 21283 "HINFO IN 3544752163504717156.183727025932527055. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.028014272s
	[INFO] 10.244.0.7:35631 - 40065 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000322088s
	[INFO] 10.244.0.7:35631 - 7559 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000764845s
	[INFO] 10.244.0.7:34404 - 3981 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000255578s
	[INFO] 10.244.0.7:34404 - 2696 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000136769s
	[INFO] 10.244.0.7:50112 - 6278 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000161467s
	[INFO] 10.244.0.7:55240 - 45850 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00011094s
	[INFO] 10.244.0.7:55240 - 20255 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000066387s
	[INFO] 10.244.0.7:50112 - 42122 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.008260936s
	[INFO] 10.244.0.7:46232 - 57708 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.013583776s
	[INFO] 10.244.0.7:46232 - 11626 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.013456147s
	[INFO] 10.244.0.7:55744 - 55640 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000110177s
	[INFO] 10.244.0.7:55744 - 62303 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000084118s
	[INFO] 10.244.0.25:48702 - 54745 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00042488s
	[INFO] 10.244.0.25:37353 - 40119 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000156297s
	[INFO] 10.244.0.25:34667 - 62464 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000262331s
	[INFO] 10.244.0.25:36541 - 14697 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000319536s
	[INFO] 10.244.0.25:52544 - 62133 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000136359s
	[INFO] 10.244.0.25:42438 - 56824 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000315786s
	[INFO] 10.244.0.25:58557 - 30085 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.004503336s
	[INFO] 10.244.0.25:40701 - 35936 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.00551878s
	[INFO] 10.244.0.25:52660 - 45116 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001708964s
	[INFO] 10.244.0.25:34300 - 22543 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001815293s
	
	
	==> describe nodes <==
	Name:               addons-742639
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-742639
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8ab9a20c866aaad18bea6fac47c5d146303457d2
	                    minikube.k8s.io/name=addons-742639
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_31T22_06_57_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-742639
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 31 Aug 2024 22:06:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-742639
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 31 Aug 2024 22:19:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 31 Aug 2024 22:15:37 +0000   Sat, 31 Aug 2024 22:06:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 31 Aug 2024 22:15:37 +0000   Sat, 31 Aug 2024 22:06:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 31 Aug 2024 22:15:37 +0000   Sat, 31 Aug 2024 22:06:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 31 Aug 2024 22:15:37 +0000   Sat, 31 Aug 2024 22:06:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-742639
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8d45917483564fe08e50d23f1793ddd1
	  System UUID:                476bafe5-64b9-4d54-a5ee-5b2c358c9d3d
	  Boot ID:                    aeae7520-62af-425c-8cc0-8a951086001b
	  Kernel Version:             5.15.0-1068-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.2.0
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m17s
	  default                     cloud-spanner-emulator-769b77f747-95jzp     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  gadget                      gadget-97mp2                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  gcp-auth                    gcp-auth-89d5ffd79-6mzss                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-7brrr    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         12m
	  kube-system                 coredns-6f6b679f8f-2r28h                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-addons-742639                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-742639                250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-742639       200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-crbmm                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-742639                100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 metrics-server-84c5f94fbc-v5wlx             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         12m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  local-path-storage          local-path-provisioner-86d989889c-l28kq     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  0 (0%)
	  memory             460Mi (5%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node addons-742639 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x7 over 12m)  kubelet          Node addons-742639 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node addons-742639 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node addons-742639 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node addons-742639 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m                kubelet          Node addons-742639 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                node-controller  Node addons-742639 event: Registered Node addons-742639 in Controller
	
	
	==> dmesg <==
	[Aug31 21:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014943] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.440364] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.805190] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.109689] kauditd_printk_skb: 36 callbacks suppressed
	[Aug31 22:08] hrtimer: interrupt took 44756400 ns
	
	
	==> etcd [45afdf3ed541] <==
	{"level":"info","ts":"2024-08-31T22:06:50.633776Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-08-31T22:06:50.633946Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-08-31T22:06:50.715213Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-31T22:06:50.715440Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-31T22:06:50.715603Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-08-31T22:06:50.715699Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-08-31T22:06:50.715790Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-08-31T22:06:50.715903Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-08-31T22:06:50.716055Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-08-31T22:06:50.719298Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-31T22:06:50.724397Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-742639 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-31T22:06:50.724655Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-31T22:06:50.724995Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-31T22:06:50.725141Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-31T22:06:50.724906Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-31T22:06:50.725048Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-31T22:06:50.725736Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-31T22:06:50.724920Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-31T22:06:50.726740Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-31T22:06:50.737444Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-31T22:06:50.738319Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-31T22:06:50.775236Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-08-31T22:16:51.574667Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1884}
	{"level":"info","ts":"2024-08-31T22:16:51.619605Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1884,"took":"44.367826ms","hash":4063641183,"current-db-size-bytes":9134080,"current-db-size":"9.1 MB","current-db-size-in-use-bytes":4997120,"current-db-size-in-use":"5.0 MB"}
	{"level":"info","ts":"2024-08-31T22:16:51.619655Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4063641183,"revision":1884,"compact-revision":-1}
	
	
	==> gcp-auth [d6d1b8bd570b] <==
	2024/08/31 22:09:48 GCP Auth Webhook started!
	2024/08/31 22:10:05 Ready to marshal response ...
	2024/08/31 22:10:05 Ready to write response ...
	2024/08/31 22:10:05 Ready to marshal response ...
	2024/08/31 22:10:05 Ready to write response ...
	2024/08/31 22:10:29 Ready to marshal response ...
	2024/08/31 22:10:29 Ready to write response ...
	2024/08/31 22:10:29 Ready to marshal response ...
	2024/08/31 22:10:29 Ready to write response ...
	2024/08/31 22:10:29 Ready to marshal response ...
	2024/08/31 22:10:29 Ready to write response ...
	2024/08/31 22:18:43 Ready to marshal response ...
	2024/08/31 22:18:43 Ready to write response ...
	2024/08/31 22:18:53 Ready to marshal response ...
	2024/08/31 22:18:53 Ready to write response ...
	2024/08/31 22:19:19 Ready to marshal response ...
	2024/08/31 22:19:19 Ready to write response ...
	2024/08/31 22:19:43 Ready to marshal response ...
	2024/08/31 22:19:43 Ready to write response ...
	2024/08/31 22:19:43 Ready to marshal response ...
	2024/08/31 22:19:43 Ready to write response ...
	
	
	==> kernel <==
	 22:19:46 up  1:02,  0 users,  load average: 0.78, 0.70, 0.57
	Linux addons-742639 5.15.0-1068-aws #74~20.04.1-Ubuntu SMP Tue Aug 6 19:45:17 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kube-apiserver [42b33a489ab3] <==
	I0831 22:10:19.801514       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0831 22:10:20.162450       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0831 22:10:20.198794       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0831 22:10:20.355332       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0831 22:10:20.437370       1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
	W0831 22:10:20.759025       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0831 22:10:20.802573       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0831 22:10:20.911876       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0831 22:10:20.912889       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0831 22:10:21.430643       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0831 22:10:21.618516       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0831 22:19:01.042442       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0831 22:19:35.791481       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0831 22:19:35.791558       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0831 22:19:35.822056       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0831 22:19:35.822110       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0831 22:19:35.833547       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0831 22:19:35.833606       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0831 22:19:35.855897       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0831 22:19:35.856205       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0831 22:19:35.945989       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0831 22:19:35.946839       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0831 22:19:36.834920       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0831 22:19:36.947238       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0831 22:19:37.007134       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [51a1ab674a7f] <==
	E0831 22:19:36.948783       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	E0831 22:19:37.008875       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0831 22:19:37.713908       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:19:37.713949       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0831 22:19:37.886613       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:19:37.886654       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0831 22:19:38.603964       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:19:38.604009       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0831 22:19:39.847787       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:19:39.848007       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0831 22:19:40.640025       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:19:40.640069       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0831 22:19:40.752625       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:19:40.752667       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0831 22:19:42.172555       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:19:42.172672       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0831 22:19:44.263872       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:19:44.263918       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0831 22:19:44.894351       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-6fb4cdfc84" duration="5.965µs"
	W0831 22:19:45.697207       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:19:45.697251       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0831 22:19:45.962170       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:19:45.962229       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0831 22:19:46.702054       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0831 22:19:46.702098       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [0e656117fcb2] <==
	I0831 22:07:03.179566       1 server_linux.go:66] "Using iptables proxy"
	I0831 22:07:03.297250       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0831 22:07:03.297321       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0831 22:07:03.317976       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0831 22:07:03.318051       1 server_linux.go:169] "Using iptables Proxier"
	I0831 22:07:03.320513       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0831 22:07:03.320842       1 server.go:483] "Version info" version="v1.31.0"
	I0831 22:07:03.320858       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0831 22:07:03.324068       1 config.go:197] "Starting service config controller"
	I0831 22:07:03.324107       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0831 22:07:03.324137       1 config.go:104] "Starting endpoint slice config controller"
	I0831 22:07:03.324141       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0831 22:07:03.327264       1 config.go:326] "Starting node config controller"
	I0831 22:07:03.327278       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0831 22:07:03.427269       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0831 22:07:03.427325       1 shared_informer.go:320] Caches are synced for node config
	I0831 22:07:03.427339       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [11806c7fbbee] <==
	W0831 22:06:54.298559       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0831 22:06:54.300792       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0831 22:06:54.298609       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0831 22:06:54.301043       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0831 22:06:54.298640       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0831 22:06:54.298673       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0831 22:06:54.301387       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0831 22:06:54.298705       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0831 22:06:54.301648       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0831 22:06:54.298740       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0831 22:06:54.301884       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0831 22:06:54.298842       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0831 22:06:54.302114       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0831 22:06:54.298869       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0831 22:06:54.302314       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0831 22:06:54.298900       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0831 22:06:54.302575       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0831 22:06:54.299197       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0831 22:06:54.302826       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0831 22:06:54.300150       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0831 22:06:54.303057       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0831 22:06:54.300464       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0831 22:06:54.303281       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0831 22:06:54.303442       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0831 22:06:55.793184       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 31 22:19:45 addons-742639 kubelet[2363]: I0831 22:19:45.697515    2363 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-99z2f\" (UniqueName: \"kubernetes.io/projected/4752523a-ac3b-4bb6-8199-9fb816d49c87-kube-api-access-99z2f\") on node \"addons-742639\" DevicePath \"\""
	Aug 31 22:19:45 addons-742639 kubelet[2363]: I0831 22:19:45.854689    2363 scope.go:117] "RemoveContainer" containerID="101da61670e9495b1090b53df81a8e9968a0de943e20687386b88f8595c745c3"
	Aug 31 22:19:45 addons-742639 kubelet[2363]: I0831 22:19:45.921187    2363 scope.go:117] "RemoveContainer" containerID="101da61670e9495b1090b53df81a8e9968a0de943e20687386b88f8595c745c3"
	Aug 31 22:19:45 addons-742639 kubelet[2363]: E0831 22:19:45.923449    2363 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 101da61670e9495b1090b53df81a8e9968a0de943e20687386b88f8595c745c3" containerID="101da61670e9495b1090b53df81a8e9968a0de943e20687386b88f8595c745c3"
	Aug 31 22:19:45 addons-742639 kubelet[2363]: I0831 22:19:45.923492    2363 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"101da61670e9495b1090b53df81a8e9968a0de943e20687386b88f8595c745c3"} err="failed to get container status \"101da61670e9495b1090b53df81a8e9968a0de943e20687386b88f8595c745c3\": rpc error: code = Unknown desc = Error response from daemon: No such container: 101da61670e9495b1090b53df81a8e9968a0de943e20687386b88f8595c745c3"
	Aug 31 22:19:45 addons-742639 kubelet[2363]: I0831 22:19:45.923519    2363 scope.go:117] "RemoveContainer" containerID="86c614f3b486b03046228249628ecc03d7ce3d2ad3957f307849549a4e200369"
	Aug 31 22:19:45 addons-742639 kubelet[2363]: I0831 22:19:45.948028    2363 scope.go:117] "RemoveContainer" containerID="86c614f3b486b03046228249628ecc03d7ce3d2ad3957f307849549a4e200369"
	Aug 31 22:19:45 addons-742639 kubelet[2363]: E0831 22:19:45.948925    2363 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 86c614f3b486b03046228249628ecc03d7ce3d2ad3957f307849549a4e200369" containerID="86c614f3b486b03046228249628ecc03d7ce3d2ad3957f307849549a4e200369"
	Aug 31 22:19:45 addons-742639 kubelet[2363]: I0831 22:19:45.948976    2363 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"86c614f3b486b03046228249628ecc03d7ce3d2ad3957f307849549a4e200369"} err="failed to get container status \"86c614f3b486b03046228249628ecc03d7ce3d2ad3957f307849549a4e200369\": rpc error: code = Unknown desc = Error response from daemon: No such container: 86c614f3b486b03046228249628ecc03d7ce3d2ad3957f307849549a4e200369"
	Aug 31 22:19:46 addons-742639 kubelet[2363]: I0831 22:19:46.200830    2363 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/ef0a0832-1343-48ff-b514-0200092101b1-data\") pod \"ef0a0832-1343-48ff-b514-0200092101b1\" (UID: \"ef0a0832-1343-48ff-b514-0200092101b1\") "
	Aug 31 22:19:46 addons-742639 kubelet[2363]: I0831 22:19:46.200883    2363 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/ef0a0832-1343-48ff-b514-0200092101b1-gcp-creds\") pod \"ef0a0832-1343-48ff-b514-0200092101b1\" (UID: \"ef0a0832-1343-48ff-b514-0200092101b1\") "
	Aug 31 22:19:46 addons-742639 kubelet[2363]: I0831 22:19:46.200913    2363 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/ef0a0832-1343-48ff-b514-0200092101b1-script\") pod \"ef0a0832-1343-48ff-b514-0200092101b1\" (UID: \"ef0a0832-1343-48ff-b514-0200092101b1\") "
	Aug 31 22:19:46 addons-742639 kubelet[2363]: I0831 22:19:46.200951    2363 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nq6w9\" (UniqueName: \"kubernetes.io/projected/ef0a0832-1343-48ff-b514-0200092101b1-kube-api-access-nq6w9\") pod \"ef0a0832-1343-48ff-b514-0200092101b1\" (UID: \"ef0a0832-1343-48ff-b514-0200092101b1\") "
	Aug 31 22:19:46 addons-742639 kubelet[2363]: I0831 22:19:46.201361    2363 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef0a0832-1343-48ff-b514-0200092101b1-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "ef0a0832-1343-48ff-b514-0200092101b1" (UID: "ef0a0832-1343-48ff-b514-0200092101b1"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Aug 31 22:19:46 addons-742639 kubelet[2363]: I0831 22:19:46.201397    2363 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef0a0832-1343-48ff-b514-0200092101b1-data" (OuterVolumeSpecName: "data") pod "ef0a0832-1343-48ff-b514-0200092101b1" (UID: "ef0a0832-1343-48ff-b514-0200092101b1"). InnerVolumeSpecName "data". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Aug 31 22:19:46 addons-742639 kubelet[2363]: I0831 22:19:46.201660    2363 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef0a0832-1343-48ff-b514-0200092101b1-script" (OuterVolumeSpecName: "script") pod "ef0a0832-1343-48ff-b514-0200092101b1" (UID: "ef0a0832-1343-48ff-b514-0200092101b1"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Aug 31 22:19:46 addons-742639 kubelet[2363]: I0831 22:19:46.206124    2363 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef0a0832-1343-48ff-b514-0200092101b1-kube-api-access-nq6w9" (OuterVolumeSpecName: "kube-api-access-nq6w9") pod "ef0a0832-1343-48ff-b514-0200092101b1" (UID: "ef0a0832-1343-48ff-b514-0200092101b1"). InnerVolumeSpecName "kube-api-access-nq6w9". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 31 22:19:46 addons-742639 kubelet[2363]: I0831 22:19:46.302246    2363 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-nq6w9\" (UniqueName: \"kubernetes.io/projected/ef0a0832-1343-48ff-b514-0200092101b1-kube-api-access-nq6w9\") on node \"addons-742639\" DevicePath \"\""
	Aug 31 22:19:46 addons-742639 kubelet[2363]: I0831 22:19:46.302284    2363 reconciler_common.go:288] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/ef0a0832-1343-48ff-b514-0200092101b1-data\") on node \"addons-742639\" DevicePath \"\""
	Aug 31 22:19:46 addons-742639 kubelet[2363]: I0831 22:19:46.302295    2363 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/ef0a0832-1343-48ff-b514-0200092101b1-gcp-creds\") on node \"addons-742639\" DevicePath \"\""
	Aug 31 22:19:46 addons-742639 kubelet[2363]: I0831 22:19:46.302305    2363 reconciler_common.go:288] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/ef0a0832-1343-48ff-b514-0200092101b1-script\") on node \"addons-742639\" DevicePath \"\""
	Aug 31 22:19:46 addons-742639 kubelet[2363]: I0831 22:19:46.385633    2363 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="195b1392-2aad-40ff-a44b-0641056727a1" path="/var/lib/kubelet/pods/195b1392-2aad-40ff-a44b-0641056727a1/volumes"
	Aug 31 22:19:46 addons-742639 kubelet[2363]: I0831 22:19:46.386060    2363 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4752523a-ac3b-4bb6-8199-9fb816d49c87" path="/var/lib/kubelet/pods/4752523a-ac3b-4bb6-8199-9fb816d49c87/volumes"
	Aug 31 22:19:46 addons-742639 kubelet[2363]: I0831 22:19:46.386746    2363 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ab07daf-f631-41cc-a6ef-6ac8881ebe78" path="/var/lib/kubelet/pods/4ab07daf-f631-41cc-a6ef-6ac8881ebe78/volumes"
	Aug 31 22:19:46 addons-742639 kubelet[2363]: I0831 22:19:46.387231    2363 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef0a0832-1343-48ff-b514-0200092101b1" path="/var/lib/kubelet/pods/ef0a0832-1343-48ff-b514-0200092101b1/volumes"
	
	
	==> storage-provisioner [08d38a504440] <==
	I0831 22:07:09.311171       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0831 22:07:09.321956       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0831 22:07:09.322006       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0831 22:07:09.337950       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0831 22:07:09.338371       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"705c67a7-99ca-493a-a378-9e5c60c5a067", APIVersion:"v1", ResourceVersion:"601", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-742639_115acfbf-a758-447d-8bd2-a90bea70ea0a became leader
	I0831 22:07:09.338474       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-742639_115acfbf-a758-447d-8bd2-a90bea70ea0a!
	I0831 22:07:09.438840       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-742639_115acfbf-a758-447d-8bd2-a90bea70ea0a!
	

-- /stdout --
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-742639 -n addons-742639
helpers_test.go:262: (dbg) Run:  kubectl --context addons-742639 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:273: non-running pods: busybox test-local-path ingress-nginx-admission-create-rs7wx ingress-nginx-admission-patch-42nm7
helpers_test.go:275: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:278: (dbg) Run:  kubectl --context addons-742639 describe pod busybox test-local-path ingress-nginx-admission-create-rs7wx ingress-nginx-admission-patch-42nm7
helpers_test.go:278: (dbg) Non-zero exit: kubectl --context addons-742639 describe pod busybox test-local-path ingress-nginx-admission-create-rs7wx ingress-nginx-admission-patch-42nm7: exit status 1 (126.88478ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-742639/192.168.49.2
	Start Time:       Sat, 31 Aug 2024 22:10:29 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-prb6w (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-prb6w:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m18s                   default-scheduler  Successfully assigned default/busybox to addons-742639
	  Normal   Pulling    7m55s (x4 over 9m17s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m55s (x4 over 9m17s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m55s (x4 over 9m17s)   kubelet            Error: ErrImagePull
	  Warning  Failed     7m27s (x6 over 9m17s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m16s (x20 over 9m17s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-742639/192.168.49.2
	Start Time:       Sat, 31 Aug 2024 22:19:47 +0000
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Container ID:  
	    Image:         busybox:stable
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lwhgt (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-lwhgt:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  0s    default-scheduler  Successfully assigned default/test-local-path to addons-742639
	  Normal  Pulling    0s    kubelet            Pulling image "busybox:stable"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-rs7wx" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-42nm7" not found

** /stderr **
helpers_test.go:280: kubectl --context addons-742639 describe pod busybox test-local-path ingress-nginx-admission-create-rs7wx ingress-nginx-admission-patch-42nm7: exit status 1
--- FAIL: TestAddons/parallel/Registry (75.12s)


Test pass (328/353)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 10.6
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.4
9 TestDownloadOnly/v1.20.0/DeleteAll 0.25
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.16
12 TestDownloadOnly/v1.31.0/json-events 5.16
13 TestDownloadOnly/v1.31.0/preload-exists 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.07
18 TestDownloadOnly/v1.31.0/DeleteAll 0.21
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.84
22 TestOffline 93.7
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.09
27 TestAddons/Setup 227.79
29 TestAddons/serial/Volcano 40.34
31 TestAddons/serial/GCPAuth/Namespaces 0.18
34 TestAddons/parallel/Ingress 19.38
35 TestAddons/parallel/InspektorGadget 11.73
36 TestAddons/parallel/MetricsServer 6.72
39 TestAddons/parallel/CSI 52.74
40 TestAddons/parallel/Headlamp 15.64
41 TestAddons/parallel/CloudSpanner 6.57
42 TestAddons/parallel/LocalPath 51.62
43 TestAddons/parallel/NvidiaDevicePlugin 6.5
44 TestAddons/parallel/Yakd 10.72
45 TestAddons/StoppedEnableDisable 11.36
46 TestCertOptions 34.96
47 TestCertExpiration 254.55
48 TestDockerFlags 46.31
49 TestForceSystemdFlag 36.82
50 TestForceSystemdEnv 42.8
56 TestErrorSpam/setup 36.65
57 TestErrorSpam/start 0.71
58 TestErrorSpam/status 1.14
59 TestErrorSpam/pause 1.34
60 TestErrorSpam/unpause 1.54
61 TestErrorSpam/stop 10.99
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 74.11
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 31.7
68 TestFunctional/serial/KubeContext 0.06
69 TestFunctional/serial/KubectlGetPods 0.11
72 TestFunctional/serial/CacheCmd/cache/add_remote 3.47
73 TestFunctional/serial/CacheCmd/cache/add_local 1
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
75 TestFunctional/serial/CacheCmd/cache/list 0.05
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.65
78 TestFunctional/serial/CacheCmd/cache/delete 0.11
79 TestFunctional/serial/MinikubeKubectlCmd 0.13
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
81 TestFunctional/serial/ExtraConfig 42.61
82 TestFunctional/serial/ComponentHealth 0.09
83 TestFunctional/serial/LogsCmd 1.22
84 TestFunctional/serial/LogsFileCmd 1.23
85 TestFunctional/serial/InvalidService 4.42
87 TestFunctional/parallel/ConfigCmd 0.42
88 TestFunctional/parallel/DashboardCmd 11.71
89 TestFunctional/parallel/DryRun 0.53
90 TestFunctional/parallel/InternationalLanguage 0.2
91 TestFunctional/parallel/StatusCmd 1.04
95 TestFunctional/parallel/ServiceCmdConnect 11.65
96 TestFunctional/parallel/AddonsCmd 0.19
97 TestFunctional/parallel/PersistentVolumeClaim 26.91
99 TestFunctional/parallel/SSHCmd 0.69
100 TestFunctional/parallel/CpCmd 2.03
102 TestFunctional/parallel/FileSync 0.37
103 TestFunctional/parallel/CertSync 2.09
107 TestFunctional/parallel/NodeLabels 0.12
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.38
111 TestFunctional/parallel/License 0.23
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.58
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.34
117 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.1
118 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
122 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
123 TestFunctional/parallel/ServiceCmd/DeployApp 7.22
124 TestFunctional/parallel/ProfileCmd/profile_not_create 0.38
125 TestFunctional/parallel/ProfileCmd/profile_list 0.37
126 TestFunctional/parallel/ProfileCmd/profile_json_output 0.37
127 TestFunctional/parallel/MountCmd/any-port 9.26
128 TestFunctional/parallel/ServiceCmd/List 0.61
129 TestFunctional/parallel/ServiceCmd/JSONOutput 0.6
130 TestFunctional/parallel/ServiceCmd/HTTPS 0.41
131 TestFunctional/parallel/ServiceCmd/Format 0.46
132 TestFunctional/parallel/ServiceCmd/URL 0.37
133 TestFunctional/parallel/MountCmd/specific-port 2.33
134 TestFunctional/parallel/MountCmd/VerifyCleanup 1.73
135 TestFunctional/parallel/Version/short 0.1
136 TestFunctional/parallel/Version/components 1.16
137 TestFunctional/parallel/ImageCommands/ImageListShort 0.28
138 TestFunctional/parallel/ImageCommands/ImageListTable 0.27
139 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
140 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
141 TestFunctional/parallel/ImageCommands/ImageBuild 3.53
142 TestFunctional/parallel/ImageCommands/Setup 0.8
143 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.05
144 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.78
145 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.3
146 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.38
147 TestFunctional/parallel/ImageCommands/ImageRemove 0.54
148 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.74
149 TestFunctional/parallel/DockerEnv/bash 1.3
150 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.52
151 TestFunctional/parallel/UpdateContextCmd/no_changes 0.23
152 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
153 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.18
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.01
156 TestFunctional/delete_minikube_cached_images 0.01
160 TestMultiControlPlane/serial/StartCluster 133.14
161 TestMultiControlPlane/serial/DeployApp 41.73
162 TestMultiControlPlane/serial/PingHostFromPods 1.65
163 TestMultiControlPlane/serial/AddWorkerNode 25.36
164 TestMultiControlPlane/serial/NodeLabels 0.12
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.76
166 TestMultiControlPlane/serial/CopyFile 19.83
167 TestMultiControlPlane/serial/StopSecondaryNode 11.77
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.57
169 TestMultiControlPlane/serial/RestartSecondaryNode 39.95
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 4.14
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 266.9
172 TestMultiControlPlane/serial/DeleteSecondaryNode 11.55
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.53
174 TestMultiControlPlane/serial/StopCluster 33.01
175 TestMultiControlPlane/serial/RestartCluster 171.09
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.59
177 TestMultiControlPlane/serial/AddSecondaryNode 45.65
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.78
181 TestImageBuild/serial/Setup 30.86
182 TestImageBuild/serial/NormalBuild 1.74
183 TestImageBuild/serial/BuildWithBuildArg 1.03
184 TestImageBuild/serial/BuildWithDockerIgnore 0.93
185 TestImageBuild/serial/BuildWithSpecifiedDockerfile 1.05
189 TestJSONOutput/start/Command 42.4
190 TestJSONOutput/start/Audit 0
192 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/pause/Command 0.6
196 TestJSONOutput/pause/Audit 0
198 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/unpause/Command 0.51
202 TestJSONOutput/unpause/Audit 0
204 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
207 TestJSONOutput/stop/Command 10.85
208 TestJSONOutput/stop/Audit 0
210 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
211 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
212 TestErrorJSONOutput 0.21
214 TestKicCustomNetwork/create_custom_network 37.62
215 TestKicCustomNetwork/use_default_bridge_network 36.52
216 TestKicExistingNetwork 32.92
217 TestKicCustomSubnet 36.74
218 TestKicStaticIP 35.61
219 TestMainNoArgs 0.05
220 TestMinikubeProfile 73.22
223 TestMountStart/serial/StartWithMountFirst 8.15
224 TestMountStart/serial/VerifyMountFirst 0.28
225 TestMountStart/serial/StartWithMountSecond 7.82
226 TestMountStart/serial/VerifyMountSecond 0.25
227 TestMountStart/serial/DeleteFirst 1.46
228 TestMountStart/serial/VerifyMountPostDelete 0.26
229 TestMountStart/serial/Stop 1.22
230 TestMountStart/serial/RestartStopped 8.66
231 TestMountStart/serial/VerifyMountPostStop 0.26
234 TestContainerIPsMultiNetwork/serial/CreateExtnet 0.08
235 TestContainerIPsMultiNetwork/serial/FreshStart 39.12
236 TestContainerIPsMultiNetwork/serial/ConnectExtnet 0.11
237 TestContainerIPsMultiNetwork/serial/Stop 10.99
238 TestContainerIPsMultiNetwork/serial/VerifyStatus 0.07
239 TestContainerIPsMultiNetwork/serial/Start 49.7
240 TestContainerIPsMultiNetwork/serial/VerifyNetworks 0.02
241 TestContainerIPsMultiNetwork/serial/Delete 2.32
242 TestContainerIPsMultiNetwork/serial/DeleteExtnet 0.11
243 TestContainerIPsMultiNetwork/serial/VerifyDeletedResources 0.11
246 TestMultiNode/serial/FreshStart2Nodes 85.28
247 TestMultiNode/serial/DeployApp2Nodes 44.31
248 TestMultiNode/serial/PingHostFrom2Pods 1.12
249 TestMultiNode/serial/AddNode 19.33
250 TestMultiNode/serial/MultiNodeLabels 0.11
251 TestMultiNode/serial/ProfileList 0.38
252 TestMultiNode/serial/CopyFile 10.36
253 TestMultiNode/serial/StopNode 2.25
254 TestMultiNode/serial/StartAfterStop 11.49
255 TestMultiNode/serial/RestartKeepsNodes 106.38
256 TestMultiNode/serial/DeleteNode 6.4
257 TestMultiNode/serial/StopMultiNode 21.72
258 TestMultiNode/serial/RestartMultiNode 54.9
259 TestMultiNode/serial/ValidateNameConflict 34.89
264 TestPreload 140.18
266 TestScheduledStopUnix 105.05
267 TestSkaffold 119.4
269 TestInsufficientStorage 12.06
270 TestRunningBinaryUpgrade 153.17
272 TestKubernetesUpgrade 372.64
273 TestMissingContainerUpgrade 133.55
285 TestStoppedBinaryUpgrade/Setup 0.64
286 TestStoppedBinaryUpgrade/Upgrade 77.25
287 TestStoppedBinaryUpgrade/MinikubeLogs 1.8
289 TestPause/serial/Start 76.86
290 TestPause/serial/SecondStartNoReconfiguration 29.85
291 TestPause/serial/Pause 0.72
292 TestPause/serial/VerifyStatus 0.41
293 TestPause/serial/Unpause 0.54
294 TestPause/serial/PauseAgain 0.75
295 TestPause/serial/DeletePaused 2.61
296 TestPause/serial/VerifyDeletedResources 0.35
305 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
306 TestNoKubernetes/serial/StartWithK8s 40.08
307 TestNoKubernetes/serial/StartWithStopK8s 19.45
308 TestNoKubernetes/serial/Start 11.99
309 TestNoKubernetes/serial/VerifyK8sNotRunning 0.26
310 TestNoKubernetes/serial/ProfileList 0.84
311 TestNoKubernetes/serial/Stop 1.29
312 TestNetworkPlugins/group/auto/Start 81.7
313 TestNoKubernetes/serial/StartNoArgs 8.98
314 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.36
315 TestNetworkPlugins/group/kindnet/Start 70.67
316 TestNetworkPlugins/group/auto/KubeletFlags 0.31
317 TestNetworkPlugins/group/auto/NetCatPod 9.31
318 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
319 TestNetworkPlugins/group/kindnet/KubeletFlags 0.3
320 TestNetworkPlugins/group/kindnet/NetCatPod 12.29
321 TestNetworkPlugins/group/auto/DNS 0.28
322 TestNetworkPlugins/group/auto/Localhost 0.23
323 TestNetworkPlugins/group/auto/HairPin 0.24
324 TestNetworkPlugins/group/kindnet/DNS 0.29
325 TestNetworkPlugins/group/kindnet/Localhost 0.28
326 TestNetworkPlugins/group/kindnet/HairPin 0.23
327 TestNetworkPlugins/group/calico/Start 79.8
328 TestNetworkPlugins/group/custom-flannel/Start 67.55
329 TestNetworkPlugins/group/calico/ControllerPod 6.01
330 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.45
331 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.28
332 TestNetworkPlugins/group/calico/KubeletFlags 0.35
333 TestNetworkPlugins/group/calico/NetCatPod 10.27
334 TestNetworkPlugins/group/custom-flannel/DNS 0.21
335 TestNetworkPlugins/group/custom-flannel/Localhost 0.18
336 TestNetworkPlugins/group/custom-flannel/HairPin 0.19
337 TestNetworkPlugins/group/calico/DNS 0.23
338 TestNetworkPlugins/group/calico/Localhost 0.2
339 TestNetworkPlugins/group/calico/HairPin 0.21
340 TestNetworkPlugins/group/false/Start 85.57
341 TestNetworkPlugins/group/enable-default-cni/Start 53.68
342 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.28
343 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.3
344 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
345 TestNetworkPlugins/group/enable-default-cni/Localhost 0.19
346 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
347 TestNetworkPlugins/group/false/KubeletFlags 0.36
348 TestNetworkPlugins/group/false/NetCatPod 11.33
349 TestNetworkPlugins/group/flannel/Start 62.12
350 TestNetworkPlugins/group/false/DNS 0.22
351 TestNetworkPlugins/group/false/Localhost 0.22
352 TestNetworkPlugins/group/false/HairPin 0.22
353 TestNetworkPlugins/group/bridge/Start 55.1
354 TestNetworkPlugins/group/flannel/ControllerPod 6.01
355 TestNetworkPlugins/group/flannel/KubeletFlags 0.3
356 TestNetworkPlugins/group/flannel/NetCatPod 12.34
357 TestNetworkPlugins/group/flannel/DNS 0.21
358 TestNetworkPlugins/group/flannel/Localhost 0.18
359 TestNetworkPlugins/group/flannel/HairPin 0.18
360 TestNetworkPlugins/group/bridge/KubeletFlags 0.42
361 TestNetworkPlugins/group/bridge/NetCatPod 11.41
362 TestNetworkPlugins/group/bridge/DNS 0.29
363 TestNetworkPlugins/group/bridge/Localhost 0.34
364 TestNetworkPlugins/group/bridge/HairPin 0.27
365 TestNetworkPlugins/group/kubenet/Start 58.37
367 TestStartStop/group/old-k8s-version/serial/FirstStart 181.79
368 TestNetworkPlugins/group/kubenet/KubeletFlags 0.3
369 TestNetworkPlugins/group/kubenet/NetCatPod 11.28
370 TestNetworkPlugins/group/kubenet/DNS 0.36
371 TestNetworkPlugins/group/kubenet/Localhost 0.28
372 TestNetworkPlugins/group/kubenet/HairPin 0.26
374 TestStartStop/group/no-preload/serial/FirstStart 51.1
375 TestStartStop/group/no-preload/serial/DeployApp 8.35
376 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.08
377 TestStartStop/group/no-preload/serial/Stop 11.1
378 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
379 TestStartStop/group/no-preload/serial/SecondStart 278.49
380 TestStartStop/group/old-k8s-version/serial/DeployApp 10.61
381 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.06
382 TestStartStop/group/old-k8s-version/serial/Stop 11.14
383 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
384 TestStartStop/group/old-k8s-version/serial/SecondStart 134.88
385 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
386 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.1
387 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
388 TestStartStop/group/old-k8s-version/serial/Pause 2.79
390 TestStartStop/group/embed-certs/serial/FirstStart 74.98
391 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
392 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.16
393 TestStartStop/group/embed-certs/serial/DeployApp 10.41
394 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.29
395 TestStartStop/group/no-preload/serial/Pause 2.92
397 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 75.67
398 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.35
399 TestStartStop/group/embed-certs/serial/Stop 11.06
400 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.25
401 TestStartStop/group/embed-certs/serial/SecondStart 272.59
402 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.38
403 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.14
404 TestStartStop/group/default-k8s-diff-port/serial/Stop 10.94
405 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
406 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 281.66
407 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
408 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
409 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.22
410 TestStartStop/group/embed-certs/serial/Pause 3
412 TestStartStop/group/newest-cni/serial/FirstStart 37.16
413 TestStartStop/group/newest-cni/serial/DeployApp 0
414 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.13
415 TestStartStop/group/newest-cni/serial/Stop 9.11
416 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
417 TestStartStop/group/newest-cni/serial/SecondStart 19.94
418 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
419 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
420 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.4
421 TestStartStop/group/newest-cni/serial/Pause 4
422 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
423 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
424 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
425 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.8
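The pass listing above is one flat row per test: an order number, the full test path, and a duration in seconds. A minimal sketch of loading such rows mechanically (the column layout is assumed from the rows shown; the report itself carries no header line):

```python
import re

# One row per test: "<order> <full/test/name> <seconds>", as in the
# pass listing above. Column names are an assumption; the report
# itself is unlabeled plain text.
ROW = re.compile(r"^(\d+)\s+(\S+)\s+([\d.]+)$")

def parse_rows(lines):
    """Yield (order, name, duration_seconds) for each well-formed row."""
    for line in lines:
        m = ROW.match(line.strip())
        if m:
            yield int(m.group(1)), m.group(2), float(m.group(3))

# Sample rows copied from the listing above.
sample = [
    "160 TestMultiControlPlane/serial/StartCluster 133.14",
    "219 TestMainNoArgs 0.05",
]
rows = list(parse_rows(sample))
slowest = max(rows, key=lambda r: r[2])
print(slowest[1])  # TestMultiControlPlane/serial/StartCluster
```

This is only a convenience for triaging long reports (e.g. sorting by duration); malformed or wrapped rows are silently skipped by the regex.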
TestDownloadOnly/v1.20.0/json-events (10.6s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-217946 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-217946 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (10.603768629s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (10.60s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.4s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-217946
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-217946: exit status 85 (401.258467ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-217946 | jenkins | v1.33.1 | 31 Aug 24 22:05 UTC |          |
	|         | -p download-only-217946        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/31 22:05:42
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0831 22:05:42.323222    7602 out.go:345] Setting OutFile to fd 1 ...
	I0831 22:05:42.323518    7602 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:05:42.323593    7602 out.go:358] Setting ErrFile to fd 2...
	I0831 22:05:42.323623    7602 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:05:42.323902    7602 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-2279/.minikube/bin
	W0831 22:05:42.324065    7602 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18943-2279/.minikube/config/config.json: open /home/jenkins/minikube-integration/18943-2279/.minikube/config/config.json: no such file or directory
	I0831 22:05:42.324556    7602 out.go:352] Setting JSON to true
	I0831 22:05:42.325484    7602 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":2888,"bootTime":1725139055,"procs":148,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0831 22:05:42.325616    7602 start.go:139] virtualization:  
	I0831 22:05:42.328644    7602 out.go:97] [download-only-217946] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	W0831 22:05:42.328860    7602 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/18943-2279/.minikube/cache/preloaded-tarball: no such file or directory
	I0831 22:05:42.328957    7602 notify.go:220] Checking for updates...
	I0831 22:05:42.331937    7602 out.go:169] MINIKUBE_LOCATION=18943
	I0831 22:05:42.334419    7602 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 22:05:42.336017    7602 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18943-2279/kubeconfig
	I0831 22:05:42.337692    7602 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-2279/.minikube
	I0831 22:05:42.339297    7602 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0831 22:05:42.342468    7602 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0831 22:05:42.342785    7602 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 22:05:42.365568    7602 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0831 22:05:42.365680    7602 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0831 22:05:42.686862    7602 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-31 22:05:42.677220135 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0831 22:05:42.686969    7602 docker.go:307] overlay module found
	I0831 22:05:42.688751    7602 out.go:97] Using the docker driver based on user configuration
	I0831 22:05:42.688781    7602 start.go:297] selected driver: docker
	I0831 22:05:42.688789    7602 start.go:901] validating driver "docker" against <nil>
	I0831 22:05:42.688900    7602 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0831 22:05:42.749803    7602 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-31 22:05:42.741015485 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0831 22:05:42.749962    7602 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0831 22:05:42.750248    7602 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0831 22:05:42.750431    7602 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0831 22:05:42.752307    7602 out.go:169] Using Docker driver with root privileges
	I0831 22:05:42.753871    7602 cni.go:84] Creating CNI manager for ""
	I0831 22:05:42.753891    7602 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0831 22:05:42.753963    7602 start.go:340] cluster config:
	{Name:download-only-217946 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-217946 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 22:05:42.755681    7602 out.go:97] Starting "download-only-217946" primary control-plane node in "download-only-217946" cluster
	I0831 22:05:42.755699    7602 cache.go:121] Beginning downloading kic base image for docker with docker
	I0831 22:05:42.757336    7602 out.go:97] Pulling base image v0.0.44-1724862063-19530 ...
	I0831 22:05:42.757360    7602 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0831 22:05:42.757495    7602 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 in local docker daemon
	I0831 22:05:42.772448    7602 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 to local cache
	I0831 22:05:42.772639    7602 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 in local cache directory
	I0831 22:05:42.772744    7602 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 to local cache
	I0831 22:05:42.815000    7602 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0831 22:05:42.815037    7602 cache.go:56] Caching tarball of preloaded images
	I0831 22:05:42.815197    7602 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0831 22:05:42.816963    7602 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0831 22:05:42.816984    7602 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0831 22:05:42.902900    7602 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /home/jenkins/minikube-integration/18943-2279/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0831 22:05:46.902621    7602 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0831 22:05:46.902760    7602 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/18943-2279/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0831 22:05:48.033468    7602 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0831 22:05:48.033990    7602 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/download-only-217946/config.json ...
	I0831 22:05:48.034053    7602 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/download-only-217946/config.json: {Name:mke9193e4eeeea127b3de0e686880b16401d793e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:05:48.034291    7602 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0831 22:05:48.034830    7602 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/18943-2279/.minikube/cache/linux/arm64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-217946 host does not exist
	  To start a cluster, run: "minikube start -p download-only-217946"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.40s)

TestDownloadOnly/v1.20.0/DeleteAll (0.25s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.25s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.16s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-217946
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.16s)

TestDownloadOnly/v1.31.0/json-events (5.16s)

=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-613931 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-613931 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (5.156523524s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (5.16s)

TestDownloadOnly/v1.31.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-613931
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-613931: exit status 85 (70.164521ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-217946 | jenkins | v1.33.1 | 31 Aug 24 22:05 UTC |                     |
	|         | -p download-only-217946        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 31 Aug 24 22:05 UTC | 31 Aug 24 22:05 UTC |
	| delete  | -p download-only-217946        | download-only-217946 | jenkins | v1.33.1 | 31 Aug 24 22:05 UTC | 31 Aug 24 22:05 UTC |
	| start   | -o=json --download-only        | download-only-613931 | jenkins | v1.33.1 | 31 Aug 24 22:05 UTC |                     |
	|         | -p download-only-613931        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/31 22:05:53
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0831 22:05:53.726096    7803 out.go:345] Setting OutFile to fd 1 ...
	I0831 22:05:53.726256    7803 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:05:53.726286    7803 out.go:358] Setting ErrFile to fd 2...
	I0831 22:05:53.726305    7803 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:05:53.726552    7803 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-2279/.minikube/bin
	I0831 22:05:53.726960    7803 out.go:352] Setting JSON to true
	I0831 22:05:53.727769    7803 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":2899,"bootTime":1725139055,"procs":145,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0831 22:05:53.727869    7803 start.go:139] virtualization:  
	I0831 22:05:53.751070    7803 out.go:97] [download-only-613931] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0831 22:05:53.751331    7803 notify.go:220] Checking for updates...
	I0831 22:05:53.767730    7803 out.go:169] MINIKUBE_LOCATION=18943
	I0831 22:05:53.788925    7803 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 22:05:53.806638    7803 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18943-2279/kubeconfig
	I0831 22:05:53.834778    7803 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-2279/.minikube
	I0831 22:05:53.856341    7803 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0831 22:05:53.899684    7803 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0831 22:05:53.900060    7803 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 22:05:53.926457    7803 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0831 22:05:53.926568    7803 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0831 22:05:53.985354    7803 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-31 22:05:53.975928738 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0831 22:05:53.985460    7803 docker.go:307] overlay module found
	I0831 22:05:53.987362    7803 out.go:97] Using the docker driver based on user configuration
	I0831 22:05:53.987386    7803 start.go:297] selected driver: docker
	I0831 22:05:53.987398    7803 start.go:901] validating driver "docker" against <nil>
	I0831 22:05:53.987494    7803 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0831 22:05:54.045405    7803 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-31 22:05:54.033202907 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0831 22:05:54.045571    7803 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0831 22:05:54.045834    7803 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0831 22:05:54.045989    7803 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0831 22:05:54.048141    7803 out.go:169] Using Docker driver with root privileges
	I0831 22:05:54.049849    7803 cni.go:84] Creating CNI manager for ""
	I0831 22:05:54.049897    7803 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0831 22:05:54.049909    7803 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0831 22:05:54.049997    7803 start.go:340] cluster config:
	{Name:download-only-613931 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:download-only-613931 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 22:05:54.051793    7803 out.go:97] Starting "download-only-613931" primary control-plane node in "download-only-613931" cluster
	I0831 22:05:54.051827    7803 cache.go:121] Beginning downloading kic base image for docker with docker
	I0831 22:05:54.053668    7803 out.go:97] Pulling base image v0.0.44-1724862063-19530 ...
	I0831 22:05:54.053713    7803 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0831 22:05:54.053916    7803 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 in local docker daemon
	I0831 22:05:54.069331    7803 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 to local cache
	I0831 22:05:54.069485    7803 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 in local cache directory
	I0831 22:05:54.069512    7803 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 in local cache directory, skipping pull
	I0831 22:05:54.069520    7803 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 exists in cache, skipping pull
	I0831 22:05:54.069529    7803 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 as a tarball
	I0831 22:05:54.107261    7803 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0831 22:05:54.107292    7803 cache.go:56] Caching tarball of preloaded images
	I0831 22:05:54.107449    7803 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0831 22:05:54.109233    7803 out.go:97] Downloading Kubernetes v1.31.0 preload ...
	I0831 22:05:54.109258    7803 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 ...
	I0831 22:05:54.192534    7803 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4?checksum=md5:90c22abece392b762c0b4e45be981bb4 -> /home/jenkins/minikube-integration/18943-2279/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0831 22:05:57.227692    7803 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 ...
	I0831 22:05:57.227880    7803 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/18943-2279/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 ...
	I0831 22:05:58.160937    7803 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0831 22:05:58.161291    7803 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/download-only-613931/config.json ...
	I0831 22:05:58.161323    7803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/download-only-613931/config.json: {Name:mkeecc539f3ed35545dac0938349d8c4079a9be8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0831 22:05:58.161493    7803 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0831 22:05:58.161648    7803 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/18943-2279/.minikube/cache/linux/arm64/v1.31.0/kubectl
	
	
	* The control-plane node download-only-613931 host does not exist
	  To start a cluster, run: "minikube start -p download-only-613931"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.07s)

TestDownloadOnly/v1.31.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.21s)

TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-613931
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.84s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-864794 --alsologtostderr --binary-mirror http://127.0.0.1:43541 --driver=docker  --container-runtime=docker
helpers_test.go:176: Cleaning up "binary-mirror-864794" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-864794
--- PASS: TestBinaryMirror (0.84s)

TestOffline (93.7s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-arm64 start -p offline-docker-181321 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-arm64 start -p offline-docker-181321 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (1m31.196045717s)
helpers_test.go:176: Cleaning up "offline-docker-181321" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p offline-docker-181321
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p offline-docker-181321: (2.507594594s)
--- PASS: TestOffline (93.70s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-742639
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-742639: exit status 85 (71.535019ms)

-- stdout --
	* Profile "addons-742639" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-742639"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-742639
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-742639: exit status 85 (84.874536ms)

-- stdout --
	* Profile "addons-742639" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-742639"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

TestAddons/Setup (227.79s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-742639 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-742639 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns: (3m47.792196395s)
--- PASS: TestAddons/Setup (227.79s)

TestAddons/serial/Volcano (40.34s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:913: volcano-controller stabilized in 49.453129ms
addons_test.go:897: volcano-scheduler stabilized in 50.118793ms
addons_test.go:905: volcano-admission stabilized in 50.150029ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:345: "volcano-scheduler-576bc46687-xw9pn" [f1d26ce7-75c9-4feb-a36a-cf015386e749] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.006145174s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:345: "volcano-admission-77d7d48b68-c6gfv" [c3ca5027-7adf-4406-8e4e-58c701c0968b] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.00373919s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:345: "volcano-controllers-56675bb4d5-cg9tb" [0a9f4848-0934-4962-b518-d296c4d2a84f] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.0036607s
addons_test.go:932: (dbg) Run:  kubectl --context addons-742639 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-742639 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-742639 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:345: "test-job-nginx-0" [7ca6a634-5d6b-4f1f-a57d-6bc882679150] Pending
helpers_test.go:345: "test-job-nginx-0" [7ca6a634-5d6b-4f1f-a57d-6bc882679150] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:345: "test-job-nginx-0" [7ca6a634-5d6b-4f1f-a57d-6bc882679150] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.004111212s
addons_test.go:968: (dbg) Run:  out/minikube-linux-arm64 -p addons-742639 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-linux-arm64 -p addons-742639 addons disable volcano --alsologtostderr -v=1: (10.59985824s)
--- PASS: TestAddons/serial/Volcano (40.34s)

TestAddons/serial/GCPAuth/Namespaces (0.18s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-742639 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-742639 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

TestAddons/parallel/Ingress (19.38s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-742639 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-742639 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-742639 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:345: "nginx" [e7386bce-d230-4490-8b4c-f925e4d793ec] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:345: "nginx" [e7386bce-d230-4490-8b4c-f925e4d793ec] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.004468124s
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-742639 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-742639 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-742639 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p addons-742639 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-arm64 -p addons-742639 addons disable ingress-dns --alsologtostderr -v=1: (1.019879173s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-arm64 -p addons-742639 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-arm64 -p addons-742639 addons disable ingress --alsologtostderr -v=1: (7.666765265s)
--- PASS: TestAddons/parallel/Ingress (19.38s)

TestAddons/parallel/InspektorGadget (11.73s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:345: "gadget-97mp2" [a47007b6-1e75-494c-85dd-058d5cab95d7] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.00345198s
addons_test.go:851: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-742639
addons_test.go:851: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-742639: (5.720310342s)
--- PASS: TestAddons/parallel/InspektorGadget (11.73s)

TestAddons/parallel/MetricsServer (6.72s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 5.094297ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:345: "metrics-server-84c5f94fbc-v5wlx" [6c6e14f0-a408-480e-9c81-b11d7f1f96f0] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.00345499s
addons_test.go:417: (dbg) Run:  kubectl --context addons-742639 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-arm64 -p addons-742639 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.72s)

TestAddons/parallel/CSI (52.74s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 8.981625ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-742639 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:395: (dbg) Run:  kubectl --context addons-742639 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-742639 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-742639 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-742639 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-742639 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-742639 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-742639 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-742639 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-742639 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-742639 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-742639 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-742639 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:345: "task-pv-pod" [09e40d8e-30d2-4d9a-8372-2647084263f0] Pending
helpers_test.go:345: "task-pv-pod" [09e40d8e-30d2-4d9a-8372-2647084263f0] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:345: "task-pv-pod" [09e40d8e-30d2-4d9a-8372-2647084263f0] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.003680793s
addons_test.go:590: (dbg) Run:  kubectl --context addons-742639 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:420: (dbg) Run:  kubectl --context addons-742639 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:420: (dbg) Run:  kubectl --context addons-742639 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-742639 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-742639 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-742639 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:395: (dbg) Run:  kubectl --context addons-742639 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-742639 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-742639 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-742639 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-742639 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-742639 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-742639 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-742639 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-742639 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-742639 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-742639 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-742639 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-742639 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-742639 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-742639 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-742639 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-742639 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-742639 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:345: "task-pv-pod-restore" [e9718d29-3934-4681-b1cb-b5e74ed6d3ff] Pending
helpers_test.go:345: "task-pv-pod-restore" [e9718d29-3934-4681-b1cb-b5e74ed6d3ff] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:345: "task-pv-pod-restore" [e9718d29-3934-4681-b1cb-b5e74ed6d3ff] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003500944s
addons_test.go:632: (dbg) Run:  kubectl --context addons-742639 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-742639 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-742639 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-arm64 -p addons-742639 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-arm64 -p addons-742639 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.709642872s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-arm64 -p addons-742639 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (52.74s)

TestAddons/parallel/Headlamp (15.64s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-742639 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:345: "headlamp-57fb76fcdb-qwz9j" [719ecfbf-a80e-4912-955a-7499a20cd4fb] Pending
helpers_test.go:345: "headlamp-57fb76fcdb-qwz9j" [719ecfbf-a80e-4912-955a-7499a20cd4fb] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:345: "headlamp-57fb76fcdb-qwz9j" [719ecfbf-a80e-4912-955a-7499a20cd4fb] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.004643724s
addons_test.go:839: (dbg) Run:  out/minikube-linux-arm64 -p addons-742639 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-arm64 -p addons-742639 addons disable headlamp --alsologtostderr -v=1: (5.683092753s)
--- PASS: TestAddons/parallel/Headlamp (15.64s)

TestAddons/parallel/CloudSpanner (6.57s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:345: "cloud-spanner-emulator-769b77f747-95jzp" [7fd274ce-f8f2-4e50-8572-7415ac53cf81] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.00331155s
addons_test.go:870: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-742639
--- PASS: TestAddons/parallel/CloudSpanner (6.57s)

TestAddons/parallel/LocalPath (51.62s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-742639 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-742639 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:395: (dbg) Run:  kubectl --context addons-742639 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-742639 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-742639 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-742639 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:395: (dbg) Run:  kubectl --context addons-742639 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:345: "test-local-path" [9e8baa69-d08a-4e66-b98b-3cf641be0666] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:345: "test-local-path" [9e8baa69-d08a-4e66-b98b-3cf641be0666] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:345: "test-local-path" [9e8baa69-d08a-4e66-b98b-3cf641be0666] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003496705s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-742639 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-arm64 -p addons-742639 ssh "cat /opt/local-path-provisioner/pvc-77e56be0-b431-4714-8471-5267667c9b66_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-742639 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-742639 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-arm64 -p addons-742639 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-arm64 -p addons-742639 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.354924094s)
--- PASS: TestAddons/parallel/LocalPath (51.62s)

TestAddons/parallel/NvidiaDevicePlugin (6.5s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:345: "nvidia-device-plugin-daemonset-gclmc" [5cbce86e-0fc2-4aed-80c5-66b21a417eb6] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.00438467s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-742639
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.50s)

TestAddons/parallel/Yakd (10.72s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:345: "yakd-dashboard-67d98fc6b-62rjn" [7c55febc-302b-4ad7-98cf-1eca189a2e1b] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004409922s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-arm64 -p addons-742639 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-arm64 -p addons-742639 addons disable yakd --alsologtostderr -v=1: (5.712063856s)
--- PASS: TestAddons/parallel/Yakd (10.72s)

TestAddons/StoppedEnableDisable (11.36s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-742639
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-742639: (11.111842723s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-742639
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-742639
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-742639
--- PASS: TestAddons/StoppedEnableDisable (11.36s)

TestCertOptions (34.96s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-724250 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-724250 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (31.845785773s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-724250 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-724250 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-724250 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-724250" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-724250
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-724250: (2.452000845s)
--- PASS: TestCertOptions (34.96s)

TestCertExpiration (254.55s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-059493 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-059493 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (40.378192426s)
E0831 23:02:42.431259    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/functional-422183/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-059493 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
E0831 23:05:28.001300    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/skaffold-184599/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-059493 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (31.189373661s)
helpers_test.go:176: Cleaning up "cert-expiration-059493" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-059493
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-059493: (2.984378505s)
--- PASS: TestCertExpiration (254.55s)

TestDockerFlags (46.31s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-arm64 start -p docker-flags-402392 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:51: (dbg) Done: out/minikube-linux-arm64 start -p docker-flags-402392 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (43.141544923s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-402392 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-402392 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:176: Cleaning up "docker-flags-402392" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-flags-402392
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-flags-402392: (2.363181495s)
--- PASS: TestDockerFlags (46.31s)

TestForceSystemdFlag (36.82s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-354812 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-354812 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (34.371691358s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-354812 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:176: Cleaning up "force-systemd-flag-354812" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-354812
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-354812: (2.08433923s)
--- PASS: TestForceSystemdFlag (36.82s)

TestForceSystemdEnv (42.8s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-056458 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-056458 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (39.742968796s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-056458 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:176: Cleaning up "force-systemd-env-056458" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-056458
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-056458: (2.555254625s)
--- PASS: TestForceSystemdEnv (42.80s)

TestErrorSpam/setup (36.65s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-620901 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-620901 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-620901 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-620901 --driver=docker  --container-runtime=docker: (36.650534519s)
--- PASS: TestErrorSpam/setup (36.65s)

TestErrorSpam/start (0.71s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-620901 --log_dir /tmp/nospam-620901 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-620901 --log_dir /tmp/nospam-620901 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-620901 --log_dir /tmp/nospam-620901 start --dry-run
--- PASS: TestErrorSpam/start (0.71s)

TestErrorSpam/status (1.14s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-620901 --log_dir /tmp/nospam-620901 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-620901 --log_dir /tmp/nospam-620901 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-620901 --log_dir /tmp/nospam-620901 status
--- PASS: TestErrorSpam/status (1.14s)

TestErrorSpam/pause (1.34s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-620901 --log_dir /tmp/nospam-620901 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-620901 --log_dir /tmp/nospam-620901 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-620901 --log_dir /tmp/nospam-620901 pause
--- PASS: TestErrorSpam/pause (1.34s)

TestErrorSpam/unpause (1.54s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-620901 --log_dir /tmp/nospam-620901 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-620901 --log_dir /tmp/nospam-620901 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-620901 --log_dir /tmp/nospam-620901 unpause
--- PASS: TestErrorSpam/unpause (1.54s)

TestErrorSpam/stop (10.99s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-620901 --log_dir /tmp/nospam-620901 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-620901 --log_dir /tmp/nospam-620901 stop: (10.801500001s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-620901 --log_dir /tmp/nospam-620901 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-620901 --log_dir /tmp/nospam-620901 stop
--- PASS: TestErrorSpam/stop (10.99s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/18943-2279/.minikube/files/etc/test/nested/copy/7597/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (74.11s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-422183 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-422183 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (1m14.107909045s)
--- PASS: TestFunctional/serial/StartWithProxy (74.11s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (31.7s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-422183 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-422183 --alsologtostderr -v=8: (31.694676144s)
functional_test.go:663: soft start took 31.69943278s for "functional-422183" cluster.
--- PASS: TestFunctional/serial/SoftStart (31.70s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.11s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-422183 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.47s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-422183 cache add registry.k8s.io/pause:3.1: (1.130772451s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-422183 cache add registry.k8s.io/pause:3.3: (1.23792664s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-422183 cache add registry.k8s.io/pause:latest: (1.10456679s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.47s)

TestFunctional/serial/CacheCmd/cache/add_local (1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-422183 /tmp/TestFunctionalserialCacheCmdcacheadd_local355776879/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 cache add minikube-local-cache-test:functional-422183
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 cache delete minikube-local-cache-test:functional-422183
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-422183
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.00s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.65s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-422183 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (294.515373ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.65s)

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 kubectl -- --context functional-422183 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-422183 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

TestFunctional/serial/ExtraConfig (42.61s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-422183 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-422183 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (42.614113567s)
functional_test.go:761: restart took 42.614255957s for "functional-422183" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (42.61s)

TestFunctional/serial/ComponentHealth (0.09s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-422183 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.09s)

TestFunctional/serial/LogsCmd (1.22s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-422183 logs: (1.216394106s)
--- PASS: TestFunctional/serial/LogsCmd (1.22s)

TestFunctional/serial/LogsFileCmd (1.23s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 logs --file /tmp/TestFunctionalserialLogsFileCmd2311406949/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-422183 logs --file /tmp/TestFunctionalserialLogsFileCmd2311406949/001/logs.txt: (1.231422404s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.23s)

TestFunctional/serial/InvalidService (4.42s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-422183 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-422183
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-422183: exit status 115 (645.930353ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31916 |
	|-----------|-------------|-------------|---------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-422183 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.42s)

TestFunctional/parallel/ConfigCmd (0.42s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-422183 config get cpus: exit status 14 (72.607557ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-422183 config get cpus: exit status 14 (72.951455ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.42s)
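The test above exercises a simple contract: `config set` persists a key, `config unset` removes it, and `config get` on a missing key prints an error and exits with status 14. A toy shell sketch of that contract for illustration (the `STORE` path and function names are assumptions for this sketch, not minikube's implementation, which keeps its config in Go code under `MINIKUBE_HOME`):

```shell
#!/bin/sh
# Toy key-value store mirroring the set/get/unset behavior exercised above.
# STORE and the function names are illustrative assumptions, not minikube code.
STORE="${STORE:-/tmp/config-sketch}"

# Append a key=value line; the most recent write wins on lookup.
config_set() { printf '%s=%s\n' "$1" "$2" >> "$STORE"; }

# Drop every line for the key, leaving other keys untouched.
config_unset() {
    if [ -f "$STORE" ]; then
        grep -v "^$1=" "$STORE" > "$STORE.tmp" || true
        mv "$STORE.tmp" "$STORE"
    fi
}

# Print the latest value, or report the error and return 14 like the log above.
config_get() {
    val=$(grep "^$1=" "$STORE" 2>/dev/null | tail -n 1 | cut -d= -f2-)
    if [ -z "$val" ]; then
        echo "Error: specified key could not be found in config" >&2
        return 14
    fi
    echo "$val"
}
```

This reproduces the sequence in the log: get (14), set, get (ok), unset, get (14).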

TestFunctional/parallel/DashboardCmd (11.71s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-422183 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-422183 --alsologtostderr -v=1] ...
helpers_test.go:509: unable to kill pid 48131: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.71s)

TestFunctional/parallel/DryRun (0.53s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-422183 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-422183 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (241.659306ms)
-- stdout --
	* [functional-422183] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18943
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18943-2279/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-2279/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0831 22:25:12.019075   47810 out.go:345] Setting OutFile to fd 1 ...
	I0831 22:25:12.019425   47810 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:25:12.019448   47810 out.go:358] Setting ErrFile to fd 2...
	I0831 22:25:12.019473   47810 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:25:12.019721   47810 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-2279/.minikube/bin
	I0831 22:25:12.020187   47810 out.go:352] Setting JSON to false
	I0831 22:25:12.021138   47810 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":4057,"bootTime":1725139055,"procs":221,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0831 22:25:12.021238   47810 start.go:139] virtualization:  
	I0831 22:25:12.024672   47810 out.go:177] * [functional-422183] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0831 22:25:12.026991   47810 notify.go:220] Checking for updates...
	I0831 22:25:12.026962   47810 out.go:177]   - MINIKUBE_LOCATION=18943
	I0831 22:25:12.031562   47810 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 22:25:12.035468   47810 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18943-2279/kubeconfig
	I0831 22:25:12.039726   47810 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-2279/.minikube
	I0831 22:25:12.043927   47810 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0831 22:25:12.046408   47810 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0831 22:25:12.049419   47810 config.go:182] Loaded profile config "functional-422183": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 22:25:12.050067   47810 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 22:25:12.091691   47810 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0831 22:25:12.091867   47810 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0831 22:25:12.182168   47810 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-31 22:25:12.16910602 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarc
h64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0831 22:25:12.182278   47810 docker.go:307] overlay module found
	I0831 22:25:12.185980   47810 out.go:177] * Using the docker driver based on existing profile
	I0831 22:25:12.188900   47810 start.go:297] selected driver: docker
	I0831 22:25:12.188922   47810 start.go:901] validating driver "docker" against &{Name:functional-422183 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-422183 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 22:25:12.189052   47810 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0831 22:25:12.191748   47810 out.go:201] 
	W0831 22:25:12.193781   47810 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0831 22:25:12.195487   47810 out.go:201] 
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-422183 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.53s)
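The expected dry-run failure above comes from a memory-floor check: a requested allocation below the usable minimum (1800MB in this log) aborts with `RSRC_INSUFFICIENT_REQ_MEMORY` and exit status 23. A rough shell sketch of that validation step (the function name and hard-coded floor are assumptions for illustration; minikube's real check lives in its Go validation code):

```shell
#!/bin/sh
# Illustrative sketch of the memory-floor validation the dry-run trips above.
# MIN_MEM_MB and validate_memory are assumptions for this demo, not minikube's
# actual implementation.
MIN_MEM_MB=1800

# Return 23 (the exit status seen in the log) when the request is too small.
validate_memory() {
    if [ "$1" -lt "$MIN_MEM_MB" ]; then
        echo "X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation ${1}MiB is less than the usable minimum of ${MIN_MEM_MB}MB" >&2
        return 23
    fi
}
```

With this sketch, `validate_memory 250` fails the way `--memory 250MB` does above, while `validate_memory 4000` (the profile's configured value) passes.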

TestFunctional/parallel/InternationalLanguage (0.2s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-422183 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-422183 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (197.604765ms)
-- stdout --
	* [functional-422183] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18943
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18943-2279/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-2279/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0831 22:25:11.808314   47762 out.go:345] Setting OutFile to fd 1 ...
	I0831 22:25:11.808511   47762 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:25:11.808540   47762 out.go:358] Setting ErrFile to fd 2...
	I0831 22:25:11.808560   47762 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:25:11.808931   47762 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-2279/.minikube/bin
	I0831 22:25:11.809352   47762 out.go:352] Setting JSON to false
	I0831 22:25:11.810488   47762 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":4057,"bootTime":1725139055,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0831 22:25:11.810596   47762 start.go:139] virtualization:  
	I0831 22:25:11.813729   47762 out.go:177] * [functional-422183] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	I0831 22:25:11.817573   47762 notify.go:220] Checking for updates...
	I0831 22:25:11.820250   47762 out.go:177]   - MINIKUBE_LOCATION=18943
	I0831 22:25:11.822655   47762 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0831 22:25:11.824918   47762 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18943-2279/kubeconfig
	I0831 22:25:11.827278   47762 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-2279/.minikube
	I0831 22:25:11.829439   47762 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0831 22:25:11.831234   47762 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0831 22:25:11.833337   47762 config.go:182] Loaded profile config "functional-422183": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 22:25:11.834105   47762 driver.go:392] Setting default libvirt URI to qemu:///system
	I0831 22:25:11.861277   47762 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0831 22:25:11.861398   47762 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0831 22:25:11.941591   47762 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-31 22:25:11.931032461 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0831 22:25:11.941710   47762 docker.go:307] overlay module found
	I0831 22:25:11.943814   47762 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0831 22:25:11.945624   47762 start.go:297] selected driver: docker
	I0831 22:25:11.945642   47762 start.go:901] validating driver "docker" against &{Name:functional-422183 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-422183 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0831 22:25:11.945870   47762 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0831 22:25:11.948167   47762 out.go:201] 
	W0831 22:25:11.949893   47762 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0831 22:25:11.951599   47762 out.go:201] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.20s)

TestFunctional/parallel/StatusCmd (1.04s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 status -o json
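The three runs above exercise the default, Go-template, and JSON output modes of `minikube status`. As a minimal reader's sketch (not part of the test suite; the sample JSON below is hypothetical, not captured from this run), the JSON form can be checked programmatically:

```python
import json

# Sketch: validating a `minikube status -o json` style document.
# The sample document is illustrative, not taken from this test run.
sample = ('{"Name":"functional-422183","Host":"Running","Kubelet":"Running",'
          '"APIServer":"Running","Kubeconfig":"Configured"}')

status = json.loads(sample)
# Everything except the profile name should report Running/Configured.
unhealthy = {k: v for k, v in status.items()
             if k != "Name" and v not in ("Running", "Configured")}
print(unhealthy)  # empty dict when every component is healthy
```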
--- PASS: TestFunctional/parallel/StatusCmd (1.04s)

TestFunctional/parallel/ServiceCmdConnect (11.65s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-422183 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
E0831 22:24:49.483246    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/addons-742639/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1635: (dbg) Run:  kubectl --context functional-422183 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:345: "hello-node-connect-65d86f57f4-xsgdt" [bde737c1-7959-4925-8ff7-37e035db15aa] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
E0831 22:24:50.124890    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/addons-742639/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:24:51.406240    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/addons-742639/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:345: "hello-node-connect-65d86f57f4-xsgdt" [bde737c1-7959-4925-8ff7-37e035db15aa] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.004059679s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:31145
functional_test.go:1675: http://192.168.49.2:31145: success! body:

Hostname: hello-node-connect-65d86f57f4-xsgdt

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31145
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-
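The echoserver body above reflects the request back as `key=value` lines. A small reader's sketch (not part of the test suite) of turning such a block into a dict, using the "Request Headers" values captured above:

```python
# Sketch: parsing the "Request Headers" block of an echoserver response
# body into a dict; the values are the ones captured in the log above.
block = """Request Headers:
\taccept-encoding=gzip
\thost=192.168.49.2:31145
\tuser-agent=Go-http-client/1.1"""

# Skip the "Request Headers:" heading, strip the tab indent,
# and split each remaining line on the first "=".
headers = dict(line.strip().split("=", 1) for line in block.splitlines()[1:])
print(headers["host"])  # -> 192.168.49.2:31145
```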

--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.65s)

TestFunctional/parallel/AddonsCmd (0.19s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.19s)

TestFunctional/parallel/PersistentVolumeClaim (26.91s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:345: "storage-provisioner" [da18ed1f-247b-4458-b7e0-7f7709d309ae] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.00526377s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-422183 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-422183 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-422183 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-422183 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:345: "sp-pod" [e18fb2f3-c1be-4839-a66b-c42cfeaadf75] Pending
helpers_test.go:345: "sp-pod" [e18fb2f3-c1be-4839-a66b-c42cfeaadf75] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0831 22:24:48.834949    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/addons-742639/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:24:48.842064    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/addons-742639/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:24:48.853442    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/addons-742639/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:24:48.874946    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/addons-742639/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:24:48.916701    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/addons-742639/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:24:48.998076    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/addons-742639/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:345: "sp-pod" [e18fb2f3-c1be-4839-a66b-c42cfeaadf75] Running
E0831 22:24:53.967621    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/addons-742639/client.crt: no such file or directory" logger="UnhandledError"
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.004109735s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-422183 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-422183 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-422183 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:345: "sp-pod" [bb8abcad-c37c-421b-b19f-e4dcd458bfdd] Pending
helpers_test.go:345: "sp-pod" [bb8abcad-c37c-421b-b19f-e4dcd458bfdd] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0831 22:24:59.089624    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/addons-742639/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:345: "sp-pod" [bb8abcad-c37c-421b-b19f-e4dcd458bfdd] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.004216423s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-422183 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.91s)

TestFunctional/parallel/SSHCmd (0.69s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.69s)

TestFunctional/parallel/CpCmd (2.03s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 ssh -n functional-422183 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 cp functional-422183:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3092152978/001/cp-test.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 ssh -n functional-422183 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 ssh -n functional-422183 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.03s)

TestFunctional/parallel/FileSync (0.37s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/7597/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 ssh "sudo cat /etc/test/nested/copy/7597/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.37s)

TestFunctional/parallel/CertSync (2.09s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/7597.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 ssh "sudo cat /etc/ssl/certs/7597.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/7597.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 ssh "sudo cat /usr/share/ca-certificates/7597.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/75972.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 ssh "sudo cat /etc/ssl/certs/75972.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/75972.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 ssh "sudo cat /usr/share/ca-certificates/75972.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.09s)

TestFunctional/parallel/NodeLabels (0.12s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-422183 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.12s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.38s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-422183 ssh "sudo systemctl is-active crio": exit status 1 (381.607627ms)

-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
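The non-zero exit here is expected: `systemctl is-active` exits 0 only for `active`, so the `inactive` stdout with exit status 3 is exactly what this test wants when docker is the selected runtime. A reader's sketch of that interpretation (not the test's actual code):

```python
# Sketch: how the "inactive" stdout plus exit status 3 above is read.
# `systemctl is-active` exits 0 only when the unit is "active", so a
# non-zero exit together with "inactive" means the runtime (crio here)
# is disabled, which is the condition this test asserts.
def runtime_disabled(stdout: str, exit_code: int) -> bool:
    return stdout.strip() == "inactive" and exit_code != 0

print(runtime_disabled("inactive\n", 3))  # -> True
```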
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.38s)

TestFunctional/parallel/License (0.23s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.23s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.58s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-422183 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-422183 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-422183 tunnel --alsologtostderr] ...
helpers_test.go:491: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-422183 tunnel --alsologtostderr] ...
helpers_test.go:509: unable to kill pid 45002: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.58s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-422183 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.34s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-422183 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:345: "nginx-svc" [919d41e2-1052-48d3-b266-2e733c975971] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:345: "nginx-svc" [919d41e2-1052-48d3-b266-2e733c975971] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003439834s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.34s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-422183 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.10s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.96.142.251 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-422183 tunnel --alsologtostderr] ...
E0831 22:24:49.160231    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/addons-742639/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.22s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-422183 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-422183 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:345: "hello-node-64b4f8f9ff-v9cv9" [5a240dd4-b6a6-4f14-bc5d-189f67adaebb] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:345: "hello-node-64b4f8f9ff-v9cv9" [5a240dd4-b6a6-4f14-bc5d-189f67adaebb] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.003673133s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.22s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "320.226441ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "53.920025ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "321.398941ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "50.458009ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

TestFunctional/parallel/MountCmd/any-port (9.26s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-422183 /tmp/TestFunctionalparallelMountCmdany-port2241683843/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1725143107398402516" to /tmp/TestFunctionalparallelMountCmdany-port2241683843/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1725143107398402516" to /tmp/TestFunctionalparallelMountCmdany-port2241683843/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1725143107398402516" to /tmp/TestFunctionalparallelMountCmdany-port2241683843/001/test-1725143107398402516
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-422183 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (359.006295ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 31 22:25 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 31 22:25 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 31 22:25 test-1725143107398402516
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 ssh cat /mount-9p/test-1725143107398402516
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-422183 replace --force -f testdata/busybox-mount-test.yaml
E0831 22:25:09.331638    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/addons-742639/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:345: "busybox-mount" [39d81248-2429-4c9a-bff4-c6f01478bfa3] Pending
helpers_test.go:345: "busybox-mount" [39d81248-2429-4c9a-bff4-c6f01478bfa3] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:345: "busybox-mount" [39d81248-2429-4c9a-bff4-c6f01478bfa3] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:345: "busybox-mount" [39d81248-2429-4c9a-bff4-c6f01478bfa3] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.00441182s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-422183 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-422183 /tmp/TestFunctionalparallelMountCmdany-port2241683843/001:/mount-9p --alsologtostderr -v=1] ...
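The mount test above verifies the guest sees exactly the files written on the host side. A reader's sketch (not the test's own code) of pulling file names out of the `ls -la` listing captured earlier in this section:

```python
# Sketch: extracting file names from the `ls -la` listing captured in
# the guest mount directory above ("total"/directory entries omitted).
listing = """-rw-r--r-- 1 docker docker 24 Aug 31 22:25 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 31 22:25 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 31 22:25 test-1725143107398402516"""

# Field 8 (0-indexed) is the file name: perms, links, owner, group,
# size, month, day, time, name.
names = [line.split(maxsplit=8)[8] for line in listing.splitlines()]
print(names)
```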
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.26s)

TestFunctional/parallel/ServiceCmd/List (0.61s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.61s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.6s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 service list -o json
functional_test.go:1494: Took "597.782779ms" to run "out/minikube-linux-arm64 -p functional-422183 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.60s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.41s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:31686
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.41s)

TestFunctional/parallel/ServiceCmd/Format (0.46s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.46s)

TestFunctional/parallel/ServiceCmd/URL (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:31686
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.37s)

TestFunctional/parallel/MountCmd/specific-port (2.33s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-422183 /tmp/TestFunctionalparallelMountCmdspecific-port1376088778/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-422183 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (507.503037ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-422183 /tmp/TestFunctionalparallelMountCmdspecific-port1376088778/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-422183 ssh "sudo umount -f /mount-9p": exit status 1 (331.351583ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-422183 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-422183 /tmp/TestFunctionalparallelMountCmdspecific-port1376088778/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.33s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.73s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-422183 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2228776114/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-422183 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2228776114/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-422183 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2228776114/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-422183 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-422183 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2228776114/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:491: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-422183 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2228776114/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:491: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-422183 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2228776114/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:491: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.73s)

TestFunctional/parallel/Version/short (0.1s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 version --short
--- PASS: TestFunctional/parallel/Version/short (0.10s)

TestFunctional/parallel/Version/components (1.16s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-422183 version -o=json --components: (1.162718171s)
--- PASS: TestFunctional/parallel/Version/components (1.16s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-422183 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-422183
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-422183
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-422183 image ls --format short --alsologtostderr:
I0831 22:25:29.150658   51043 out.go:345] Setting OutFile to fd 1 ...
I0831 22:25:29.150871   51043 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0831 22:25:29.150900   51043 out.go:358] Setting ErrFile to fd 2...
I0831 22:25:29.150929   51043 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0831 22:25:29.151240   51043 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-2279/.minikube/bin
I0831 22:25:29.151987   51043 config.go:182] Loaded profile config "functional-422183": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0831 22:25:29.152171   51043 config.go:182] Loaded profile config "functional-422183": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0831 22:25:29.152793   51043 cli_runner.go:164] Run: docker container inspect functional-422183 --format={{.State.Status}}
I0831 22:25:29.195122   51043 ssh_runner.go:195] Run: systemctl --version
I0831 22:25:29.195191   51043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-422183
I0831 22:25:29.230108   51043 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/18943-2279/.minikube/machines/functional-422183/id_rsa Username:docker}
I0831 22:25:29.332832   51043 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-422183 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/kube-scheduler              | v1.31.0           | fbbbd428abb4d | 66MB   |
| registry.k8s.io/kube-proxy                  | v1.31.0           | 71d55d66fd4ee | 94.7MB |
| registry.k8s.io/etcd                        | 3.5.15-0          | 27e3830e14027 | 139MB  |
| docker.io/kicbase/echo-server               | functional-422183 | ce2d2cda2d858 | 4.78MB |
| registry.k8s.io/coredns/coredns             | v1.11.1           | 2437cf7621777 | 57.4MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| docker.io/library/minikube-local-cache-test | functional-422183 | 93de31457c6a5 | 30B    |
| registry.k8s.io/kube-apiserver              | v1.31.0           | cd0f0ae0ec9e0 | 91.5MB |
| registry.k8s.io/pause                       | 3.10              | afb61768ce381 | 514kB  |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| docker.io/library/nginx                     | alpine            | 70594c812316a | 47MB   |
| docker.io/library/nginx                     | latest            | a9dfdba8b7190 | 193MB  |
| registry.k8s.io/kube-controller-manager     | v1.31.0           | fcb0683e6bdbd | 85.9MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-422183 image ls --format table --alsologtostderr:
I0831 22:25:30.047352   51317 out.go:345] Setting OutFile to fd 1 ...
I0831 22:25:30.047543   51317 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0831 22:25:30.047550   51317 out.go:358] Setting ErrFile to fd 2...
I0831 22:25:30.047556   51317 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0831 22:25:30.047835   51317 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-2279/.minikube/bin
I0831 22:25:30.048554   51317 config.go:182] Loaded profile config "functional-422183": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0831 22:25:30.048694   51317 config.go:182] Loaded profile config "functional-422183": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0831 22:25:30.049194   51317 cli_runner.go:164] Run: docker container inspect functional-422183 --format={{.State.Status}}
I0831 22:25:30.075936   51317 ssh_runner.go:195] Run: systemctl --version
I0831 22:25:30.076003   51317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-422183
I0831 22:25:30.107568   51317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/18943-2279/.minikube/machines/functional-422183/id_rsa Username:docker}
I0831 22:25:30.217946   51317 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 image ls --format json --alsologtostderr
E0831 22:25:29.812924    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/addons-742639/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-422183 image ls --format json --alsologtostderr:
[{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"94700000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-422183"],"size":"4780000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00
b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"a9dfdba8b719078c5705fdecd6f8315765cc79e473111aa9451551ddc340b2bc","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"193000000"},{"id":"cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"91500000"},{"id":"fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"66000000"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"514000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":[],"repoTags":["regist
ry.k8s.io/coredns/coredns:v1.11.1"],"size":"57400000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"93de31457c6a5ac76d8cff2cfba3c432cb134ae95c0f268a08d6360c761cc267","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-422183"],"size":"30"},{"id":"70594c812316a9bc20dd5d679982c6322dc7cf0128687ae9f849d0207783e753","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"47000000"},{"id":"fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"85900000"},{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"siz
e":"139000000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-422183 image ls --format json --alsologtostderr:
I0831 22:25:29.798790   51251 out.go:345] Setting OutFile to fd 1 ...
I0831 22:25:29.799019   51251 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0831 22:25:29.799047   51251 out.go:358] Setting ErrFile to fd 2...
I0831 22:25:29.799064   51251 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0831 22:25:29.799353   51251 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-2279/.minikube/bin
I0831 22:25:29.800032   51251 config.go:182] Loaded profile config "functional-422183": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0831 22:25:29.800206   51251 config.go:182] Loaded profile config "functional-422183": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0831 22:25:29.800730   51251 cli_runner.go:164] Run: docker container inspect functional-422183 --format={{.State.Status}}
I0831 22:25:29.828169   51251 ssh_runner.go:195] Run: systemctl --version
I0831 22:25:29.828232   51251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-422183
I0831 22:25:29.854424   51251 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/18943-2279/.minikube/machines/functional-422183/id_rsa Username:docker}
I0831 22:25:29.947706   51251 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-422183 image ls --format yaml --alsologtostderr:
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-422183
size: "4780000"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: 70594c812316a9bc20dd5d679982c6322dc7cf0128687ae9f849d0207783e753
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "47000000"
- id: cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "91500000"
- id: fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "66000000"
- id: 2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "57400000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 93de31457c6a5ac76d8cff2cfba3c432cb134ae95c0f268a08d6360c761cc267
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-422183
size: "30"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "85900000"
- id: 71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "94700000"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139000000"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "514000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: a9dfdba8b719078c5705fdecd6f8315765cc79e473111aa9451551ddc340b2bc
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-422183 image ls --format yaml --alsologtostderr:
I0831 22:25:29.427883   51128 out.go:345] Setting OutFile to fd 1 ...
I0831 22:25:29.428139   51128 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0831 22:25:29.428151   51128 out.go:358] Setting ErrFile to fd 2...
I0831 22:25:29.428157   51128 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0831 22:25:29.428442   51128 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-2279/.minikube/bin
I0831 22:25:29.429201   51128 config.go:182] Loaded profile config "functional-422183": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0831 22:25:29.429367   51128 config.go:182] Loaded profile config "functional-422183": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0831 22:25:29.429880   51128 cli_runner.go:164] Run: docker container inspect functional-422183 --format={{.State.Status}}
I0831 22:25:29.452405   51128 ssh_runner.go:195] Run: systemctl --version
I0831 22:25:29.452461   51128 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-422183
I0831 22:25:29.472844   51128 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/18943-2279/.minikube/machines/functional-422183/id_rsa Username:docker}
I0831 22:25:29.583863   51128 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-422183 ssh pgrep buildkitd: exit status 1 (340.454232ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 image build -t localhost/my-image:functional-422183 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-422183 image build -t localhost/my-image:functional-422183 testdata/build --alsologtostderr: (2.944596968s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-422183 image build -t localhost/my-image:functional-422183 testdata/build --alsologtostderr:
I0831 22:25:30.043267   51312 out.go:345] Setting OutFile to fd 1 ...
I0831 22:25:30.043597   51312 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0831 22:25:30.043632   51312 out.go:358] Setting ErrFile to fd 2...
I0831 22:25:30.043650   51312 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0831 22:25:30.043996   51312 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-2279/.minikube/bin
I0831 22:25:30.044895   51312 config.go:182] Loaded profile config "functional-422183": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0831 22:25:30.045810   51312 config.go:182] Loaded profile config "functional-422183": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0831 22:25:30.046389   51312 cli_runner.go:164] Run: docker container inspect functional-422183 --format={{.State.Status}}
I0831 22:25:30.072450   51312 ssh_runner.go:195] Run: systemctl --version
I0831 22:25:30.072514   51312 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-422183
I0831 22:25:30.117365   51312 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/18943-2279/.minikube/machines/functional-422183/id_rsa Username:docker}
I0831 22:25:30.222668   51312 build_images.go:161] Building image from path: /tmp/build.4096541354.tar
I0831 22:25:30.222764   51312 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0831 22:25:30.236142   51312 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4096541354.tar
I0831 22:25:30.247897   51312 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4096541354.tar: stat -c "%s %y" /var/lib/minikube/build/build.4096541354.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.4096541354.tar': No such file or directory
I0831 22:25:30.247929   51312 ssh_runner.go:362] scp /tmp/build.4096541354.tar --> /var/lib/minikube/build/build.4096541354.tar (3072 bytes)
I0831 22:25:30.282046   51312 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4096541354
I0831 22:25:30.293151   51312 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4096541354 -xf /var/lib/minikube/build/build.4096541354.tar
I0831 22:25:30.308264   51312 docker.go:360] Building image: /var/lib/minikube/build/build.4096541354
I0831 22:25:30.308413   51312 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-422183 /var/lib/minikube/build/build.4096541354
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.3s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.6s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.2s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:f485f2900af896d129c8b988fe0dc86da443b60046ce6187045e8bd30f5a3290 done
#8 naming to localhost/my-image:functional-422183 done
#8 DONE 0.1s
I0831 22:25:32.892040   51312 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-422183 /var/lib/minikube/build/build.4096541354: (2.583602775s)
I0831 22:25:32.892111   51312 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4096541354
I0831 22:25:32.901416   51312 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4096541354.tar
I0831 22:25:32.910390   51312 build_images.go:217] Built localhost/my-image:functional-422183 from /tmp/build.4096541354.tar
I0831 22:25:32.910420   51312 build_images.go:133] succeeded building to: functional-422183
I0831 22:25:32.910426   51312 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.53s)
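The BuildKit steps #1-#8 above imply a tiny three-instruction Dockerfile (97B transferred: FROM busybox, RUN true, ADD content.txt). The sketch below reconstructs a plausible build context for illustration only; the actual file lives in minikube's testdata and may differ.

```shell
# Hypothetical reconstruction of the build context implied by steps #5-#7
# above (FROM busybox, RUN true, ADD content.txt). Illustrative sketch,
# not minikube's actual testdata.
mkdir -p /tmp/build-sketch && cd /tmp/build-sketch
cat > Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
echo "sketch" > content.txt
# With a docker daemon available, the build the log records would then be:
# docker build -t localhost/my-image:functional-422183 .
```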

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-422183
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.80s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 image load --daemon kicbase/echo-server:functional-422183 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.05s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 image load --daemon kicbase/echo-server:functional-422183 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.78s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
2024/08/31 22:25:23 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-422183
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 image load --daemon kicbase/echo-server:functional-422183 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.30s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 image save kicbase/echo-server:functional-422183 /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.38s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 image rm kicbase/echo-server:functional-422183 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.74s)

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (1.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-422183 docker-env) && out/minikube-linux-arm64 status -p functional-422183"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-422183 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.30s)
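The `eval $(minikube docker-env)` pattern exercised above works because `docker-env` prints `export` statements that point the host's docker CLI at the daemon inside the minikube node. A minimal sketch, with illustrative values (the variable names and the port are assumptions, not taken from this run's output):

```shell
# Sketch of what the eval pattern does: docker-env emits export lines,
# and eval applies them in the current shell. Values are illustrative.
docker_env_output='export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://127.0.0.1:32778"
export MINIKUBE_ACTIVE_DOCKERD="functional-422183"'
eval "$docker_env_output"
# Subsequent `docker images` calls in this shell now target that daemon.
echo "$DOCKER_HOST"
```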

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-422183
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 image save --daemon kicbase/echo-server:functional-422183 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-422183
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.52s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-422183 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.18s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-422183
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-422183
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-422183
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (133.14s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-282715 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0831 22:26:10.777531    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/addons-742639/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:27:32.699720    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/addons-742639/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-282715 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (2m12.310920035s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-282715 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (133.14s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (41.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-282715 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-282715 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-282715 -- rollout status deployment/busybox: (5.483260085s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-282715 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-282715 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-282715 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-282715 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-282715 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-282715 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-282715 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-282715 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-282715 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-282715 -- exec busybox-7dff88458-ft49r -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-282715 -- exec busybox-7dff88458-fxfdw -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-282715 -- exec busybox-7dff88458-rcthh -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-282715 -- exec busybox-7dff88458-ft49r -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-282715 -- exec busybox-7dff88458-fxfdw -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-282715 -- exec busybox-7dff88458-rcthh -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-282715 -- exec busybox-7dff88458-ft49r -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-282715 -- exec busybox-7dff88458-fxfdw -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-282715 -- exec busybox-7dff88458-rcthh -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (41.73s)
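The retries at ha_test.go:149 above poll the jsonpath query until three pod IPs are reported; the readiness condition amounts to counting whitespace-separated fields. A self-contained sketch of that check, using the "-- stdout --" captures from the log (the third IP is illustrative):

```shell
# Sketch of the readiness check behind the retries above: keep polling
# until the jsonpath output contains three pod IPs.
count_ips() { echo "$1" | wc -w; }

early="10.244.0.4 10.244.1.2"            # 2 IPs -> "may be temporary", retry
ready="10.244.0.4 10.244.1.2 10.244.2.2" # 3 IPs -> proceed (third IP assumed)
count_ips "$early"
count_ips "$ready"
```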

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-282715 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-282715 -- exec busybox-7dff88458-ft49r -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-282715 -- exec busybox-7dff88458-ft49r -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-282715 -- exec busybox-7dff88458-fxfdw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-282715 -- exec busybox-7dff88458-fxfdw -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-282715 -- exec busybox-7dff88458-rcthh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-282715 -- exec busybox-7dff88458-rcthh -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.65s)
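The extraction pipeline at ha_test.go:207 above (`nslookup ... | awk 'NR==5' | cut -d' ' -f3`) relies on busybox nslookup putting the answer on line 5, with the IP as the third space-separated field. A self-contained sketch with sample output (the lookup text is illustrative; the 192.168.49.1 host IP matches the ping target in the log):

```shell
# Sketch of the host-IP extraction used by the test: line 5 of busybox
# nslookup output holds the answer; field 3 is the address.
nslookup_output='Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.49.1 host.minikube.internal'
host_ip=$(echo "$nslookup_output" | awk 'NR==5' | cut -d' ' -f3)
echo "$host_ip"
# The test then pings this address: ping -c 1 "$host_ip"
```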

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (25.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-282715 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-282715 -v=7 --alsologtostderr: (24.338834424s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-282715 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-282715 status -v=7 --alsologtostderr: (1.017903852s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (25.36s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-282715 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.76s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (19.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-282715 status --output json -v=7 --alsologtostderr
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p ha-282715 cp testdata/cp-test.txt ha-282715:/home/docker/cp-test.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-282715 ssh -n ha-282715 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p ha-282715 cp ha-282715:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile798410969/001/cp-test_ha-282715.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-282715 ssh -n ha-282715 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p ha-282715 cp ha-282715:/home/docker/cp-test.txt ha-282715-m02:/home/docker/cp-test_ha-282715_ha-282715-m02.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-282715 ssh -n ha-282715 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-282715 ssh -n ha-282715-m02 "sudo cat /home/docker/cp-test_ha-282715_ha-282715-m02.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p ha-282715 cp ha-282715:/home/docker/cp-test.txt ha-282715-m03:/home/docker/cp-test_ha-282715_ha-282715-m03.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-282715 ssh -n ha-282715 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-282715 ssh -n ha-282715-m03 "sudo cat /home/docker/cp-test_ha-282715_ha-282715-m03.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p ha-282715 cp ha-282715:/home/docker/cp-test.txt ha-282715-m04:/home/docker/cp-test_ha-282715_ha-282715-m04.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-282715 ssh -n ha-282715 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-282715 ssh -n ha-282715-m04 "sudo cat /home/docker/cp-test_ha-282715_ha-282715-m04.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p ha-282715 cp testdata/cp-test.txt ha-282715-m02:/home/docker/cp-test.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-282715 ssh -n ha-282715-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p ha-282715 cp ha-282715-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile798410969/001/cp-test_ha-282715-m02.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-282715 ssh -n ha-282715-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p ha-282715 cp ha-282715-m02:/home/docker/cp-test.txt ha-282715:/home/docker/cp-test_ha-282715-m02_ha-282715.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-282715 ssh -n ha-282715-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-282715 ssh -n ha-282715 "sudo cat /home/docker/cp-test_ha-282715-m02_ha-282715.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p ha-282715 cp ha-282715-m02:/home/docker/cp-test.txt ha-282715-m03:/home/docker/cp-test_ha-282715-m02_ha-282715-m03.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-282715 ssh -n ha-282715-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-282715 ssh -n ha-282715-m03 "sudo cat /home/docker/cp-test_ha-282715-m02_ha-282715-m03.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p ha-282715 cp ha-282715-m02:/home/docker/cp-test.txt ha-282715-m04:/home/docker/cp-test_ha-282715-m02_ha-282715-m04.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-282715 ssh -n ha-282715-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-282715 ssh -n ha-282715-m04 "sudo cat /home/docker/cp-test_ha-282715-m02_ha-282715-m04.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p ha-282715 cp testdata/cp-test.txt ha-282715-m03:/home/docker/cp-test.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-282715 ssh -n ha-282715-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p ha-282715 cp ha-282715-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile798410969/001/cp-test_ha-282715-m03.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-282715 ssh -n ha-282715-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p ha-282715 cp ha-282715-m03:/home/docker/cp-test.txt ha-282715:/home/docker/cp-test_ha-282715-m03_ha-282715.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-282715 ssh -n ha-282715-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-282715 ssh -n ha-282715 "sudo cat /home/docker/cp-test_ha-282715-m03_ha-282715.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p ha-282715 cp ha-282715-m03:/home/docker/cp-test.txt ha-282715-m02:/home/docker/cp-test_ha-282715-m03_ha-282715-m02.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-282715 ssh -n ha-282715-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-282715 ssh -n ha-282715-m02 "sudo cat /home/docker/cp-test_ha-282715-m03_ha-282715-m02.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p ha-282715 cp ha-282715-m03:/home/docker/cp-test.txt ha-282715-m04:/home/docker/cp-test_ha-282715-m03_ha-282715-m04.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-282715 ssh -n ha-282715-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-282715 ssh -n ha-282715-m04 "sudo cat /home/docker/cp-test_ha-282715-m03_ha-282715-m04.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p ha-282715 cp testdata/cp-test.txt ha-282715-m04:/home/docker/cp-test.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-282715 ssh -n ha-282715-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p ha-282715 cp ha-282715-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile798410969/001/cp-test_ha-282715-m04.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-282715 ssh -n ha-282715-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p ha-282715 cp ha-282715-m04:/home/docker/cp-test.txt ha-282715:/home/docker/cp-test_ha-282715-m04_ha-282715.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-282715 ssh -n ha-282715-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-282715 ssh -n ha-282715 "sudo cat /home/docker/cp-test_ha-282715-m04_ha-282715.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p ha-282715 cp ha-282715-m04:/home/docker/cp-test.txt ha-282715-m02:/home/docker/cp-test_ha-282715-m04_ha-282715-m02.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-282715 ssh -n ha-282715-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-282715 ssh -n ha-282715-m02 "sudo cat /home/docker/cp-test_ha-282715-m04_ha-282715-m02.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p ha-282715 cp ha-282715-m04:/home/docker/cp-test.txt ha-282715-m03:/home/docker/cp-test_ha-282715-m04_ha-282715-m03.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-282715 ssh -n ha-282715-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p ha-282715 ssh -n ha-282715-m03 "sudo cat /home/docker/cp-test_ha-282715-m04_ha-282715-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.83s)
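Every cp/ssh pair in the CopyFile section above follows the same pattern: copy a file to a node, then read it back and compare against the original. A self-contained sketch of that pattern, with plain `cp` and `cat` standing in for `minikube cp` and `ssh sudo cat` (file contents are illustrative):

```shell
# Sketch of the copy-and-verify pattern the CopyFile test repeats: in the
# real test the copy is `minikube -p ha-282715 cp ...` and the read-back
# is `ssh -n <node> "sudo cat ..."`; local cp/cat keep this runnable.
src=$(mktemp)
dst=$(mktemp -d)/cp-test.txt
echo "sample cp-test payload" > "$src"
cp "$src" "$dst"
# Verify the round trip by comparing contents.
cmp -s "$src" "$dst" && echo "contents match"
```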

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (11.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-282715 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-282715 node stop m02 -v=7 --alsologtostderr: (10.96030587s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-282715 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-282715 status -v=7 --alsologtostderr: exit status 7 (813.534804ms)

                                                
                                                
-- stdout --
	ha-282715
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-282715-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-282715-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-282715-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0831 22:29:29.374624   74254 out.go:345] Setting OutFile to fd 1 ...
	I0831 22:29:29.374818   74254 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:29:29.374831   74254 out.go:358] Setting ErrFile to fd 2...
	I0831 22:29:29.374836   74254 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:29:29.375094   74254 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-2279/.minikube/bin
	I0831 22:29:29.375373   74254 out.go:352] Setting JSON to false
	I0831 22:29:29.375410   74254 mustload.go:65] Loading cluster: ha-282715
	I0831 22:29:29.375467   74254 notify.go:220] Checking for updates...
	I0831 22:29:29.376753   74254 config.go:182] Loaded profile config "ha-282715": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 22:29:29.376770   74254 status.go:255] checking status of ha-282715 ...
	I0831 22:29:29.377506   74254 cli_runner.go:164] Run: docker container inspect ha-282715 --format={{.State.Status}}
	I0831 22:29:29.396610   74254 status.go:330] ha-282715 host status = "Running" (err=<nil>)
	I0831 22:29:29.396639   74254 host.go:66] Checking if "ha-282715" exists ...
	I0831 22:29:29.396952   74254 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-282715")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-282715
	I0831 22:29:29.428977   74254 host.go:66] Checking if "ha-282715" exists ...
	I0831 22:29:29.429281   74254 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0831 22:29:29.429324   74254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-282715
	I0831 22:29:29.456593   74254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/18943-2279/.minikube/machines/ha-282715/id_rsa Username:docker}
	I0831 22:29:29.549605   74254 ssh_runner.go:195] Run: systemctl --version
	I0831 22:29:29.555976   74254 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0831 22:29:29.569697   74254 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0831 22:29:29.634555   74254 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:71 SystemTime:2024-08-31 22:29:29.622457306 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0831 22:29:29.635533   74254 kubeconfig.go:125] found "ha-282715" server: "https://192.168.49.254:8443"
	I0831 22:29:29.635613   74254 api_server.go:166] Checking apiserver status ...
	I0831 22:29:29.635738   74254 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 22:29:29.647952   74254 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2332/cgroup
	I0831 22:29:29.657494   74254 api_server.go:182] apiserver freezer: "8:freezer:/docker/1b38264256def5583456d90b500dcb497579cd548cc9dd266ca0b041ee9cb64d/kubepods/burstable/podeab19e5f224de4d19db13f952e128ce1/f85f0c33ad1f6e92eb0f437e335102f7f587453bf2f65e4b36deeb7e588553ac"
	I0831 22:29:29.657575   74254 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/1b38264256def5583456d90b500dcb497579cd548cc9dd266ca0b041ee9cb64d/kubepods/burstable/podeab19e5f224de4d19db13f952e128ce1/f85f0c33ad1f6e92eb0f437e335102f7f587453bf2f65e4b36deeb7e588553ac/freezer.state
	I0831 22:29:29.666537   74254 api_server.go:204] freezer state: "THAWED"
	I0831 22:29:29.666564   74254 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0831 22:29:29.674712   74254 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0831 22:29:29.674740   74254 status.go:422] ha-282715 apiserver status = Running (err=<nil>)
	I0831 22:29:29.674751   74254 status.go:257] ha-282715 status: &{Name:ha-282715 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0831 22:29:29.674768   74254 status.go:255] checking status of ha-282715-m02 ...
	I0831 22:29:29.675093   74254 cli_runner.go:164] Run: docker container inspect ha-282715-m02 --format={{.State.Status}}
	I0831 22:29:29.693121   74254 status.go:330] ha-282715-m02 host status = "Stopped" (err=<nil>)
	I0831 22:29:29.693142   74254 status.go:343] host is not running, skipping remaining checks
	I0831 22:29:29.693151   74254 status.go:257] ha-282715-m02 status: &{Name:ha-282715-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0831 22:29:29.693180   74254 status.go:255] checking status of ha-282715-m03 ...
	I0831 22:29:29.693491   74254 cli_runner.go:164] Run: docker container inspect ha-282715-m03 --format={{.State.Status}}
	I0831 22:29:29.712171   74254 status.go:330] ha-282715-m03 host status = "Running" (err=<nil>)
	I0831 22:29:29.712197   74254 host.go:66] Checking if "ha-282715-m03" exists ...
	I0831 22:29:29.712532   74254 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-282715")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-282715-m03
	I0831 22:29:29.728871   74254 host.go:66] Checking if "ha-282715-m03" exists ...
	I0831 22:29:29.729191   74254 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0831 22:29:29.729242   74254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-282715-m03
	I0831 22:29:29.745918   74254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/18943-2279/.minikube/machines/ha-282715-m03/id_rsa Username:docker}
	I0831 22:29:29.844455   74254 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0831 22:29:29.857154   74254 kubeconfig.go:125] found "ha-282715" server: "https://192.168.49.254:8443"
	I0831 22:29:29.857186   74254 api_server.go:166] Checking apiserver status ...
	I0831 22:29:29.857226   74254 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 22:29:29.871990   74254 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2125/cgroup
	I0831 22:29:29.883093   74254 api_server.go:182] apiserver freezer: "8:freezer:/docker/014ae53f82bf50b8f0907b25e5e98e5fa185d09447f09b3f1f068b5fcabe0b7c/kubepods/burstable/podf56c3ae31915b6c1ce59f1b3be5f57c8/89ae411512692973456c5761e49684635061bee311e6a440f9534303847d33b6"
	I0831 22:29:29.883192   74254 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/014ae53f82bf50b8f0907b25e5e98e5fa185d09447f09b3f1f068b5fcabe0b7c/kubepods/burstable/podf56c3ae31915b6c1ce59f1b3be5f57c8/89ae411512692973456c5761e49684635061bee311e6a440f9534303847d33b6/freezer.state
	I0831 22:29:29.893504   74254 api_server.go:204] freezer state: "THAWED"
	I0831 22:29:29.893531   74254 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0831 22:29:29.903488   74254 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0831 22:29:29.903520   74254 status.go:422] ha-282715-m03 apiserver status = Running (err=<nil>)
	I0831 22:29:29.903530   74254 status.go:257] ha-282715-m03 status: &{Name:ha-282715-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0831 22:29:29.903586   74254 status.go:255] checking status of ha-282715-m04 ...
	I0831 22:29:29.903990   74254 cli_runner.go:164] Run: docker container inspect ha-282715-m04 --format={{.State.Status}}
	I0831 22:29:29.922114   74254 status.go:330] ha-282715-m04 host status = "Running" (err=<nil>)
	I0831 22:29:29.922158   74254 host.go:66] Checking if "ha-282715-m04" exists ...
	I0831 22:29:29.922461   74254 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-282715")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-282715-m04
	I0831 22:29:29.941704   74254 host.go:66] Checking if "ha-282715-m04" exists ...
	I0831 22:29:29.942011   74254 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0831 22:29:29.942055   74254 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-282715-m04
	I0831 22:29:29.969462   74254 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/18943-2279/.minikube/machines/ha-282715-m04/id_rsa Username:docker}
	I0831 22:29:30.093509   74254 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0831 22:29:30.124247   74254 status.go:257] ha-282715-m04 status: &{Name:ha-282715-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.77s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.57s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.57s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (39.95s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-282715 node start m02 -v=7 --alsologtostderr
E0831 22:29:39.365096    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/functional-422183/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:29:39.371492    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/functional-422183/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:29:39.382818    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/functional-422183/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:29:39.404204    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/functional-422183/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:29:39.445642    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/functional-422183/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:29:39.527068    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/functional-422183/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:29:39.688653    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/functional-422183/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:29:40.010216    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/functional-422183/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:29:40.651878    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/functional-422183/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:29:41.933249    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/functional-422183/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:29:44.495612    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/functional-422183/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:29:48.834333    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/addons-742639/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:29:49.617742    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/functional-422183/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:29:59.859990    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/functional-422183/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-282715 node start m02 -v=7 --alsologtostderr: (38.484032595s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-282715 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-282715 status -v=7 --alsologtostderr: (1.347258353s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (39.95s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (4.14s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (4.13636854s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (4.14s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (266.9s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-282715 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-282715 -v=7 --alsologtostderr
E0831 22:30:16.541169    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/addons-742639/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:30:20.341658    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/functional-422183/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-282715 -v=7 --alsologtostderr: (34.597763679s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-282715 --wait=true -v=7 --alsologtostderr
E0831 22:31:01.303308    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/functional-422183/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:32:23.225581    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/functional-422183/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:34:39.365110    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/functional-422183/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-282715 --wait=true -v=7 --alsologtostderr: (3m52.123012854s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-282715
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (266.90s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (11.55s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-282715 node delete m03 -v=7 --alsologtostderr
E0831 22:34:48.833966    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/addons-742639/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-282715 node delete m03 -v=7 --alsologtostderr: (10.616005264s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-282715 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.55s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.53s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.53s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (33.01s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-282715 stop -v=7 --alsologtostderr
E0831 22:35:07.067299    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/functional-422183/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-282715 stop -v=7 --alsologtostderr: (32.89650904s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-282715 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-282715 status -v=7 --alsologtostderr: exit status 7 (109.697094ms)

                                                
                                                
-- stdout --
	ha-282715
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-282715-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-282715-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0831 22:35:26.720193  102039 out.go:345] Setting OutFile to fd 1 ...
	I0831 22:35:26.720378  102039 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:35:26.720408  102039 out.go:358] Setting ErrFile to fd 2...
	I0831 22:35:26.720428  102039 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:35:26.720669  102039 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-2279/.minikube/bin
	I0831 22:35:26.720909  102039 out.go:352] Setting JSON to false
	I0831 22:35:26.720974  102039 mustload.go:65] Loading cluster: ha-282715
	I0831 22:35:26.721066  102039 notify.go:220] Checking for updates...
	I0831 22:35:26.721439  102039 config.go:182] Loaded profile config "ha-282715": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 22:35:26.721473  102039 status.go:255] checking status of ha-282715 ...
	I0831 22:35:26.722021  102039 cli_runner.go:164] Run: docker container inspect ha-282715 --format={{.State.Status}}
	I0831 22:35:26.741893  102039 status.go:330] ha-282715 host status = "Stopped" (err=<nil>)
	I0831 22:35:26.741914  102039 status.go:343] host is not running, skipping remaining checks
	I0831 22:35:26.741921  102039 status.go:257] ha-282715 status: &{Name:ha-282715 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0831 22:35:26.741944  102039 status.go:255] checking status of ha-282715-m02 ...
	I0831 22:35:26.742260  102039 cli_runner.go:164] Run: docker container inspect ha-282715-m02 --format={{.State.Status}}
	I0831 22:35:26.763134  102039 status.go:330] ha-282715-m02 host status = "Stopped" (err=<nil>)
	I0831 22:35:26.763174  102039 status.go:343] host is not running, skipping remaining checks
	I0831 22:35:26.763181  102039 status.go:257] ha-282715-m02 status: &{Name:ha-282715-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0831 22:35:26.763206  102039 status.go:255] checking status of ha-282715-m04 ...
	I0831 22:35:26.763480  102039 cli_runner.go:164] Run: docker container inspect ha-282715-m04 --format={{.State.Status}}
	I0831 22:35:26.783083  102039 status.go:330] ha-282715-m04 host status = "Stopped" (err=<nil>)
	I0831 22:35:26.783110  102039 status.go:343] host is not running, skipping remaining checks
	I0831 22:35:26.783117  102039 status.go:257] ha-282715-m04 status: &{Name:ha-282715-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (33.01s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (171.09s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-282715 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-282715 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (2m50.135736841s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-282715 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (171.09s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.59s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.59s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (45.65s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-282715 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-282715 --control-plane -v=7 --alsologtostderr: (44.609615844s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-282715 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-282715 status -v=7 --alsologtostderr: (1.039584802s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (45.65s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.78s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.78s)

                                                
                                    
TestImageBuild/serial/Setup (30.86s)
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -p image-952995 --driver=docker  --container-runtime=docker
E0831 22:39:39.365422    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/functional-422183/client.crt: no such file or directory" logger="UnhandledError"
image_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -p image-952995 --driver=docker  --container-runtime=docker: (30.855671945s)
--- PASS: TestImageBuild/serial/Setup (30.86s)

                                                
                                    
TestImageBuild/serial/NormalBuild (1.74s)
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-952995
image_test.go:78: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-952995: (1.737578085s)
--- PASS: TestImageBuild/serial/NormalBuild (1.74s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (1.03s)
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-952995
image_test.go:99: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-952995: (1.028773856s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.03s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (0.93s)
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-952995
E0831 22:39:48.833731    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/addons-742639/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.93s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (1.05s)
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-952995
image_test.go:88: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-952995: (1.052680966s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (1.05s)

                                                
                                    
TestJSONOutput/start/Command (42.4s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-322204 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-322204 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (42.391861369s)
--- PASS: TestJSONOutput/start/Command (42.40s)

                                                
                                    
TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.6s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-322204 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.60s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.51s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-322204 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.51s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (10.85s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-322204 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-322204 --output=json --user=testUser: (10.851385953s)
--- PASS: TestJSONOutput/stop/Command (10.85s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-741505 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-741505 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (74.409894ms)
-- stdout --
	{"specversion":"1.0","id":"8fa53b1f-4c33-462e-8f3c-30a5dfec13f9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-741505] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4e0d12bb-ca92-49c7-bd7c-3a34044cef72","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18943"}}
	{"specversion":"1.0","id":"c861e842-24c7-49e2-a063-63a3e65d4c6e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"9f07a393-24df-4117-8a2c-72b69134a330","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18943-2279/kubeconfig"}}
	{"specversion":"1.0","id":"7df31656-4c54-48fd-a624-419ed9072a1d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-2279/.minikube"}}
	{"specversion":"1.0","id":"35d4278a-a0ce-4605-9c3f-36dcc0809599","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"a3517625-7c6b-4dbd-9f2e-d23cae99378c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"cdab7276-7529-4d01-88cb-28467e21f7df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-741505" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-741505
--- PASS: TestErrorJSONOutput (0.21s)
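Each line in the stdout block above is a CloudEvents-style JSON event whose `type` ends in `step`, `info`, or `error`. As a minimal sketch of consuming that stream (three of the lines from this run reproduced, with the `id` values shortened for brevity), the error event can be picked out by the last segment of its `type` field:

```python
import json

# Three CloudEvents-style lines from the TestErrorJSONOutput stdout above
# (ids shortened; field contents otherwise as logged).
lines = [
    '{"specversion":"1.0","id":"step-0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-741505] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}',
    '{"specversion":"1.0","id":"info-0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18943"}}',
    '{"specversion":"1.0","id":"err-0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver \'fail\' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}',
]

errors = []
for line in lines:
    event = json.loads(line)
    # Classify by the final segment of the event type: "step", "info", or "error".
    kind = event["type"].rsplit(".", 1)[-1]
    if kind == "error":
        errors.append((event["data"]["name"], event["data"]["exitcode"]))

print(errors)  # → [('DRV_UNSUPPORTED_OS', '56')]
```

This matches the exit status 56 reported by the test: the error event carries the same code in its `data.exitcode` field.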

TestKicCustomNetwork/create_custom_network (37.62s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-298935 --network=
E0831 22:41:11.903291    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/addons-742639/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-298935 --network=: (35.548242141s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-298935" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-298935
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-298935: (2.039253264s)
--- PASS: TestKicCustomNetwork/create_custom_network (37.62s)

TestKicCustomNetwork/use_default_bridge_network (36.52s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-605111 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-605111 --network=bridge: (34.449104848s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-605111" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-605111
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-605111: (2.029969736s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (36.52s)

TestKicExistingNetwork (32.92s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-195198 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-195198 --network=existing-network: (30.782375365s)
helpers_test.go:176: Cleaning up "existing-network-195198" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-195198
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-195198: (1.985602738s)
--- PASS: TestKicExistingNetwork (32.92s)

TestKicCustomSubnet (36.74s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-594138 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-594138 --subnet=192.168.60.0/24: (34.624352335s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-594138 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-594138" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-594138
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-594138: (2.089326634s)
--- PASS: TestKicCustomSubnet (36.74s)

TestKicStaticIP (35.61s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-470829 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-470829 --static-ip=192.168.200.200: (33.338849905s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-470829 ip
helpers_test.go:176: Cleaning up "static-ip-470829" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-470829
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-470829: (2.094312819s)
--- PASS: TestKicStaticIP (35.61s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (73.22s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-265590 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-265590 --driver=docker  --container-runtime=docker: (31.814125005s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-268027 --driver=docker  --container-runtime=docker
E0831 22:44:39.365779    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/functional-422183/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:44:48.833694    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/addons-742639/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-268027 --driver=docker  --container-runtime=docker: (35.252053015s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-265590
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-268027
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:176: Cleaning up "second-268027" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p second-268027
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p second-268027: (2.050617696s)
helpers_test.go:176: Cleaning up "first-265590" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p first-265590
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p first-265590: (2.794064907s)
--- PASS: TestMinikubeProfile (73.22s)

TestMountStart/serial/StartWithMountFirst (8.15s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-703295 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-703295 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (7.14471899s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.15s)

TestMountStart/serial/VerifyMountFirst (0.28s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-703295 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

TestMountStart/serial/StartWithMountSecond (7.82s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-721805 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-721805 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (6.823818068s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.82s)

TestMountStart/serial/VerifyMountSecond (0.25s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-721805 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.25s)

TestMountStart/serial/DeleteFirst (1.46s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-703295 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-703295 --alsologtostderr -v=5: (1.454922534s)
--- PASS: TestMountStart/serial/DeleteFirst (1.46s)

TestMountStart/serial/VerifyMountPostDelete (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-721805 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

TestMountStart/serial/Stop (1.22s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-721805
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-721805: (1.21724198s)
--- PASS: TestMountStart/serial/Stop (1.22s)

TestMountStart/serial/RestartStopped (8.66s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-721805
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-721805: (7.657027908s)
--- PASS: TestMountStart/serial/RestartStopped (8.66s)

TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-721805 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

TestContainerIPsMultiNetwork/serial/CreateExtnet (0.08s)

=== RUN   TestContainerIPsMultiNetwork/serial/CreateExtnet
multinetwork_test.go:99: (dbg) Run:  docker network create network-extnet-560275
multinetwork_test.go:104: external network network-extnet-560275 created
--- PASS: TestContainerIPsMultiNetwork/serial/CreateExtnet (0.08s)

TestContainerIPsMultiNetwork/serial/FreshStart (39.12s)

=== RUN   TestContainerIPsMultiNetwork/serial/FreshStart
multinetwork_test.go:148: (dbg) Run:  out/minikube-linux-arm64 start -p extnet-554795 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
E0831 22:46:02.429246    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/functional-422183/client.crt: no such file or directory" logger="UnhandledError"
multinetwork_test.go:148: (dbg) Done: out/minikube-linux-arm64 start -p extnet-554795 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (39.09804411s)
multinetwork_test.go:161: cluster extnet-554795 started with address 192.168.67.2/
--- PASS: TestContainerIPsMultiNetwork/serial/FreshStart (39.12s)

TestContainerIPsMultiNetwork/serial/ConnectExtnet (0.11s)

=== RUN   TestContainerIPsMultiNetwork/serial/ConnectExtnet
multinetwork_test.go:113: (dbg) Run:  docker network connect network-extnet-560275 extnet-554795
multinetwork_test.go:126: cluster extnet-554795 was attached to network network-extnet-560275 with address 172.18.0.2/
--- PASS: TestContainerIPsMultiNetwork/serial/ConnectExtnet (0.11s)

TestContainerIPsMultiNetwork/serial/Stop (10.99s)

=== RUN   TestContainerIPsMultiNetwork/serial/Stop
multinetwork_test.go:170: (dbg) Run:  out/minikube-linux-arm64 stop -p extnet-554795 --alsologtostderr -v=5
multinetwork_test.go:170: (dbg) Done: out/minikube-linux-arm64 stop -p extnet-554795 --alsologtostderr -v=5: (10.993029022s)
--- PASS: TestContainerIPsMultiNetwork/serial/Stop (10.99s)

TestContainerIPsMultiNetwork/serial/VerifyStatus (0.07s)

=== RUN   TestContainerIPsMultiNetwork/serial/VerifyStatus
helpers_test.go:700: (dbg) Run:  out/minikube-linux-arm64 status -p extnet-554795 --output=json --layout=cluster
helpers_test.go:700: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p extnet-554795 --output=json --layout=cluster: exit status 7 (68.636415ms)
-- stdout --
	{"Name":"extnet-554795","StatusCode":405,"StatusName":"Stopped","Step":"Done","StepDetail":"* 1 node stopped.","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":405,"StatusName":"Stopped"}},"Nodes":[{"Name":"extnet-554795","StatusCode":405,"StatusName":"Stopped","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestContainerIPsMultiNetwork/serial/VerifyStatus (0.07s)
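The `--layout=cluster` payload above is machine-readable as well. A small sketch, using the exact JSON captured in this run (where StatusCode 405 pairs with the StatusName "Stopped"), of checking the per-node state:

```python
import json

# Exact status JSON from the VerifyStatus stdout above.
status_json = (
    '{"Name":"extnet-554795","StatusCode":405,"StatusName":"Stopped",'
    '"Step":"Done","StepDetail":"* 1 node stopped.","BinaryVersion":"v1.33.1",'
    '"Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":405,"StatusName":"Stopped"}},'
    '"Nodes":[{"Name":"extnet-554795","StatusCode":405,"StatusName":"Stopped",'
    '"Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},'
    '"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}'
)

cluster = json.loads(status_json)
# Collect the names of all nodes whose reported state is "Stopped".
stopped = [node["Name"] for node in cluster["Nodes"] if node["StatusName"] == "Stopped"]
print(cluster["StatusName"], stopped)  # → Stopped ['extnet-554795']
```

The single stopped node here is consistent with the `"StepDetail":"* 1 node stopped."` field and with the exit status 7 returned by `minikube status`.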

TestContainerIPsMultiNetwork/serial/Start (49.7s)

=== RUN   TestContainerIPsMultiNetwork/serial/Start
multinetwork_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p extnet-554795 --alsologtostderr -v=5
multinetwork_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p extnet-554795 --alsologtostderr -v=5: (49.65876247s)
--- PASS: TestContainerIPsMultiNetwork/serial/Start (49.70s)

TestContainerIPsMultiNetwork/serial/VerifyNetworks (0.02s)

=== RUN   TestContainerIPsMultiNetwork/serial/VerifyNetworks
multinetwork_test.go:225: (dbg) Run:  docker inspect extnet-554795
--- PASS: TestContainerIPsMultiNetwork/serial/VerifyNetworks (0.02s)

TestContainerIPsMultiNetwork/serial/Delete (2.32s)

=== RUN   TestContainerIPsMultiNetwork/serial/Delete
multinetwork_test.go:253: (dbg) Run:  out/minikube-linux-arm64 delete -p extnet-554795 --alsologtostderr -v=5
multinetwork_test.go:253: (dbg) Done: out/minikube-linux-arm64 delete -p extnet-554795 --alsologtostderr -v=5: (2.321020848s)
--- PASS: TestContainerIPsMultiNetwork/serial/Delete (2.32s)

TestContainerIPsMultiNetwork/serial/DeleteExtnet (0.11s)

=== RUN   TestContainerIPsMultiNetwork/serial/DeleteExtnet
multinetwork_test.go:136: (dbg) Run:  docker network rm network-extnet-560275
multinetwork_test.go:140: external network network-extnet-560275 deleted
--- PASS: TestContainerIPsMultiNetwork/serial/DeleteExtnet (0.11s)

TestContainerIPsMultiNetwork/serial/VerifyDeletedResources (0.11s)

=== RUN   TestContainerIPsMultiNetwork/serial/VerifyDeletedResources
multinetwork_test.go:263: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
multinetwork_test.go:289: (dbg) Run:  docker ps -a
multinetwork_test.go:294: (dbg) Run:  docker volume inspect extnet-554795
multinetwork_test.go:294: (dbg) Non-zero exit: docker volume inspect extnet-554795: exit status 1 (14.238429ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: get extnet-554795: no such volume
** /stderr **
multinetwork_test.go:299: (dbg) Run:  docker network ls
--- PASS: TestContainerIPsMultiNetwork/serial/VerifyDeletedResources (0.11s)

TestMultiNode/serial/FreshStart2Nodes (85.28s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-721483 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-721483 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m24.593383727s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721483 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (85.28s)

TestMultiNode/serial/DeployApp2Nodes (44.31s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-721483 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-721483 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-721483 -- rollout status deployment/busybox: (3.779657061s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-721483 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-721483 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-721483 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-721483 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-721483 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-721483 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-721483 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-721483 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-721483 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-721483 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-721483 -- exec busybox-7dff88458-4tvq7 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-721483 -- exec busybox-7dff88458-b2jcd -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-721483 -- exec busybox-7dff88458-4tvq7 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-721483 -- exec busybox-7dff88458-b2jcd -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-721483 -- exec busybox-7dff88458-4tvq7 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-721483 -- exec busybox-7dff88458-b2jcd -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (44.31s)
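The eight retries above come from the check at multinode_test.go:514: the jsonpath query flattens every pod IP into one space-separated string, and the test polls until there is one IP per busybox replica. A minimal sketch of that check (the `count_pod_ips` helper is hypothetical; the live query is the `kubectl ... -o jsonpath='{.items[*].status.podIP}'` command shown above):

```shell
# Hypothetical helper: count whitespace-separated IPs in the jsonpath output.
count_pod_ips() {
  printf '%s\n' "$1" | wc -w
}

# While the second replica is still scheduling, only one IP is present --
# exactly the "expected 2 Pod IPs but got 1 (may be temporary)" retries above.
count_pod_ips "10.244.0.3"              # one IP  -> keep retrying
count_pod_ips "10.244.0.3 10.244.1.2"   # two IPs -> done (second IP is illustrative)
```

Once the pod on the second node gets an address the count reaches 2 and the test moves on to the nslookup checks.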

TestMultiNode/serial/PingHostFrom2Pods (1.12s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-721483 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-721483 -- exec busybox-7dff88458-4tvq7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-721483 -- exec busybox-7dff88458-4tvq7 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-721483 -- exec busybox-7dff88458-b2jcd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-721483 -- exec busybox-7dff88458-b2jcd -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.12s)
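The `awk 'NR==5' | cut -d' ' -f3` pipeline at multinode_test.go:572 pulls the host gateway IP out of busybox nslookup output, and the test then pings that address from each pod. A sketch with a sample transcript (the transcript text is an assumed illustration of busybox nslookup output, where the answer lands on line 5):

```shell
# Illustrative busybox-style nslookup output for host.minikube.internal.
nslookup_output='Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.67.1 host.minikube.internal'

# Line 5 is "Address 1: <ip> <name>"; field 3 (space-delimited) is the IP.
host_ip="$(printf '%s\n' "$nslookup_output" | awk 'NR==5' | cut -d' ' -f3)"
echo "$host_ip"   # -> 192.168.67.1, the address the pods then ping with: ping -c 1 192.168.67.1
```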

TestMultiNode/serial/AddNode (19.33s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-721483 -v 3 --alsologtostderr
E0831 22:49:39.364915    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/functional-422183/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-721483 -v 3 --alsologtostderr: (18.569094614s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721483 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (19.33s)

TestMultiNode/serial/MultiNodeLabels (0.11s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-721483 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.11s)

TestMultiNode/serial/ProfileList (0.38s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.38s)

TestMultiNode/serial/CopyFile (10.36s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721483 status --output json --alsologtostderr
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721483 cp testdata/cp-test.txt multinode-721483:/home/docker/cp-test.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721483 ssh -n multinode-721483 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721483 cp multinode-721483:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3438721191/001/cp-test_multinode-721483.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721483 ssh -n multinode-721483 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721483 cp multinode-721483:/home/docker/cp-test.txt multinode-721483-m02:/home/docker/cp-test_multinode-721483_multinode-721483-m02.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721483 ssh -n multinode-721483 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721483 ssh -n multinode-721483-m02 "sudo cat /home/docker/cp-test_multinode-721483_multinode-721483-m02.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721483 cp multinode-721483:/home/docker/cp-test.txt multinode-721483-m03:/home/docker/cp-test_multinode-721483_multinode-721483-m03.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721483 ssh -n multinode-721483 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721483 ssh -n multinode-721483-m03 "sudo cat /home/docker/cp-test_multinode-721483_multinode-721483-m03.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721483 cp testdata/cp-test.txt multinode-721483-m02:/home/docker/cp-test.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721483 ssh -n multinode-721483-m02 "sudo cat /home/docker/cp-test.txt"
E0831 22:49:48.833684    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/addons-742639/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721483 cp multinode-721483-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3438721191/001/cp-test_multinode-721483-m02.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721483 ssh -n multinode-721483-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721483 cp multinode-721483-m02:/home/docker/cp-test.txt multinode-721483:/home/docker/cp-test_multinode-721483-m02_multinode-721483.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721483 ssh -n multinode-721483-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721483 ssh -n multinode-721483 "sudo cat /home/docker/cp-test_multinode-721483-m02_multinode-721483.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721483 cp multinode-721483-m02:/home/docker/cp-test.txt multinode-721483-m03:/home/docker/cp-test_multinode-721483-m02_multinode-721483-m03.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721483 ssh -n multinode-721483-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721483 ssh -n multinode-721483-m03 "sudo cat /home/docker/cp-test_multinode-721483-m02_multinode-721483-m03.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721483 cp testdata/cp-test.txt multinode-721483-m03:/home/docker/cp-test.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721483 ssh -n multinode-721483-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721483 cp multinode-721483-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3438721191/001/cp-test_multinode-721483-m03.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721483 ssh -n multinode-721483-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721483 cp multinode-721483-m03:/home/docker/cp-test.txt multinode-721483:/home/docker/cp-test_multinode-721483-m03_multinode-721483.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721483 ssh -n multinode-721483-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721483 ssh -n multinode-721483 "sudo cat /home/docker/cp-test_multinode-721483-m03_multinode-721483.txt"
helpers_test.go:557: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721483 cp multinode-721483-m03:/home/docker/cp-test.txt multinode-721483-m02:/home/docker/cp-test_multinode-721483-m03_multinode-721483-m02.txt
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721483 ssh -n multinode-721483-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721483 ssh -n multinode-721483-m02 "sudo cat /home/docker/cp-test_multinode-721483-m03_multinode-721483-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.36s)
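Each `cp` above is immediately verified by reading the file back with `ssh -n ... "sudo cat ..."`. A local stand-in for that round-trip check (temporary files here; the real test copies testdata/cp-test.txt to /home/docker/cp-test.txt on each node):

```shell
src="$(mktemp)"; dst="$(mktemp)"
printf 'cp-test contents\n' > "$src"
cp "$src" "$dst"                        # stands in for: minikube -p <profile> cp
cmp -s "$src" "$dst" && echo "round-trip OK"   # -> round-trip OK
```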

TestMultiNode/serial/StopNode (2.25s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721483 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-721483 node stop m03: (1.231316235s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721483 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-721483 status: exit status 7 (499.979665ms)
-- stdout --
	multinode-721483
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-721483-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-721483-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721483 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-721483 status --alsologtostderr: exit status 7 (523.262858ms)
-- stdout --
	multinode-721483
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-721483-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-721483-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0831 22:49:56.625748  186156 out.go:345] Setting OutFile to fd 1 ...
	I0831 22:49:56.625935  186156 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:49:56.625949  186156 out.go:358] Setting ErrFile to fd 2...
	I0831 22:49:56.625954  186156 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:49:56.626231  186156 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-2279/.minikube/bin
	I0831 22:49:56.626443  186156 out.go:352] Setting JSON to false
	I0831 22:49:56.626501  186156 mustload.go:65] Loading cluster: multinode-721483
	I0831 22:49:56.626589  186156 notify.go:220] Checking for updates...
	I0831 22:49:56.626940  186156 config.go:182] Loaded profile config "multinode-721483": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 22:49:56.626959  186156 status.go:255] checking status of multinode-721483 ...
	I0831 22:49:56.627887  186156 cli_runner.go:164] Run: docker container inspect multinode-721483 --format={{.State.Status}}
	I0831 22:49:56.645446  186156 status.go:330] multinode-721483 host status = "Running" (err=<nil>)
	I0831 22:49:56.645471  186156 host.go:66] Checking if "multinode-721483" exists ...
	I0831 22:49:56.645789  186156 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-721483")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-721483
	I0831 22:49:56.675032  186156 host.go:66] Checking if "multinode-721483" exists ...
	I0831 22:49:56.675362  186156 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0831 22:49:56.675419  186156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-721483
	I0831 22:49:56.693788  186156 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32928 SSHKeyPath:/home/jenkins/minikube-integration/18943-2279/.minikube/machines/multinode-721483/id_rsa Username:docker}
	I0831 22:49:56.789444  186156 ssh_runner.go:195] Run: systemctl --version
	I0831 22:49:56.794308  186156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0831 22:49:56.807025  186156 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0831 22:49:56.878347  186156 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-08-31 22:49:56.867900707 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0831 22:49:56.878939  186156 kubeconfig.go:125] found "multinode-721483" server: "https://192.168.67.2:8443"
	I0831 22:49:56.878971  186156 api_server.go:166] Checking apiserver status ...
	I0831 22:49:56.879074  186156 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0831 22:49:56.891523  186156 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2282/cgroup
	I0831 22:49:56.901089  186156 api_server.go:182] apiserver freezer: "8:freezer:/docker/ad3ded946e889199514df4846517a1fa866f8ed47c6af8f41ad5f8d0e8e5540f/kubepods/burstable/pod5f496048eff10cb4462e85bdd73fc821/1bb2f09fc365ab6c2f669b3a2577d269fd1f5f26d4edba3d5536699e0cea831d"
	I0831 22:49:56.901172  186156 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/ad3ded946e889199514df4846517a1fa866f8ed47c6af8f41ad5f8d0e8e5540f/kubepods/burstable/pod5f496048eff10cb4462e85bdd73fc821/1bb2f09fc365ab6c2f669b3a2577d269fd1f5f26d4edba3d5536699e0cea831d/freezer.state
	I0831 22:49:56.910173  186156 api_server.go:204] freezer state: "THAWED"
	I0831 22:49:56.910198  186156 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0831 22:49:56.917945  186156 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0831 22:49:56.917974  186156 status.go:422] multinode-721483 apiserver status = Running (err=<nil>)
	I0831 22:49:56.917986  186156 status.go:257] multinode-721483 status: &{Name:multinode-721483 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0831 22:49:56.918038  186156 status.go:255] checking status of multinode-721483-m02 ...
	I0831 22:49:56.918360  186156 cli_runner.go:164] Run: docker container inspect multinode-721483-m02 --format={{.State.Status}}
	I0831 22:49:56.935240  186156 status.go:330] multinode-721483-m02 host status = "Running" (err=<nil>)
	I0831 22:49:56.935267  186156 host.go:66] Checking if "multinode-721483-m02" exists ...
	I0831 22:49:56.935580  186156 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-721483")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-721483-m02
	I0831 22:49:56.951771  186156 host.go:66] Checking if "multinode-721483-m02" exists ...
	I0831 22:49:56.952100  186156 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0831 22:49:56.952157  186156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-721483-m02
	I0831 22:49:56.969306  186156 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32933 SSHKeyPath:/home/jenkins/minikube-integration/18943-2279/.minikube/machines/multinode-721483-m02/id_rsa Username:docker}
	I0831 22:49:57.061214  186156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0831 22:49:57.074812  186156 status.go:257] multinode-721483-m02 status: &{Name:multinode-721483-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0831 22:49:57.074852  186156 status.go:255] checking status of multinode-721483-m03 ...
	I0831 22:49:57.075345  186156 cli_runner.go:164] Run: docker container inspect multinode-721483-m03 --format={{.State.Status}}
	I0831 22:49:57.098106  186156 status.go:330] multinode-721483-m03 host status = "Stopped" (err=<nil>)
	I0831 22:49:57.098134  186156 status.go:343] host is not running, skipping remaining checks
	I0831 22:49:57.098149  186156 status.go:257] multinode-721483-m03 status: &{Name:multinode-721483-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.25s)
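Both `status` invocations above exit 7 rather than 0 because one host is Stopped; the test treats that as the expected outcome. A sketch that summarizes the per-node `host:` lines from the stdout block with awk (the sample text mirrors the output above, un-indented and abridged):

```shell
# Abridged copy of the status stdout above.
status='multinode-721483
type: Control Plane
host: Running
apiserver: Running
multinode-721483-m02
type: Worker
host: Running
multinode-721483-m03
type: Worker
host: Stopped'

# Tally hosts per state; sort for a stable order.
summary="$(printf '%s\n' "$status" | awk -F': ' '/^host:/ {n[$2]++} END {for (s in n) print s, n[s]}' | sort)"
printf '%s\n' "$summary"   # -> Running 2 / Stopped 1, one per line
```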

TestMultiNode/serial/StartAfterStop (11.49s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721483 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-721483 node start m03 -v=7 --alsologtostderr: (10.708284002s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721483 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (11.49s)

TestMultiNode/serial/RestartKeepsNodes (106.38s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-721483
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-721483
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-721483: (22.810368232s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-721483 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-721483 --wait=true -v=8 --alsologtostderr: (1m23.446052316s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-721483
--- PASS: TestMultiNode/serial/RestartKeepsNodes (106.38s)
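The test brackets stop/start with two `node list` runs and requires the same node set afterwards. A sketch of that comparison (the name/IP table below is an assumed illustration of `node list` output, and `nodes_of` is a hypothetical helper):

```shell
before='multinode-721483	192.168.67.2
multinode-721483-m02	192.168.67.3
multinode-721483-m03	192.168.67.4'
after="$before"   # a restart that keeps all nodes reproduces the list

nodes_of() { printf '%s\n' "$1" | cut -f1; }   # first tab-separated column
[ "$(nodes_of "$before")" = "$(nodes_of "$after")" ] && echo "node list preserved"
```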

TestMultiNode/serial/DeleteNode (6.4s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721483 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-721483 node delete m03: (5.312630024s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721483 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (6.40s)

TestMultiNode/serial/StopMultiNode (21.72s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721483 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-721483 stop: (21.529445176s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721483 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-721483 status: exit status 7 (95.992286ms)
-- stdout --
	multinode-721483
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-721483-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721483 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-721483 status --alsologtostderr: exit status 7 (98.611548ms)
-- stdout --
	multinode-721483
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-721483-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0831 22:52:23.054469  199760 out.go:345] Setting OutFile to fd 1 ...
	I0831 22:52:23.054627  199760 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:52:23.054640  199760 out.go:358] Setting ErrFile to fd 2...
	I0831 22:52:23.054646  199760 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0831 22:52:23.055382  199760 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-2279/.minikube/bin
	I0831 22:52:23.055678  199760 out.go:352] Setting JSON to false
	I0831 22:52:23.055766  199760 mustload.go:65] Loading cluster: multinode-721483
	I0831 22:52:23.056267  199760 config.go:182] Loaded profile config "multinode-721483": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0831 22:52:23.056315  199760 status.go:255] checking status of multinode-721483 ...
	I0831 22:52:23.056805  199760 notify.go:220] Checking for updates...
	I0831 22:52:23.056905  199760 cli_runner.go:164] Run: docker container inspect multinode-721483 --format={{.State.Status}}
	I0831 22:52:23.075695  199760 status.go:330] multinode-721483 host status = "Stopped" (err=<nil>)
	I0831 22:52:23.075716  199760 status.go:343] host is not running, skipping remaining checks
	I0831 22:52:23.075724  199760 status.go:257] multinode-721483 status: &{Name:multinode-721483 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0831 22:52:23.075760  199760 status.go:255] checking status of multinode-721483-m02 ...
	I0831 22:52:23.076094  199760 cli_runner.go:164] Run: docker container inspect multinode-721483-m02 --format={{.State.Status}}
	I0831 22:52:23.104675  199760 status.go:330] multinode-721483-m02 host status = "Stopped" (err=<nil>)
	I0831 22:52:23.104713  199760 status.go:343] host is not running, skipping remaining checks
	I0831 22:52:23.104724  199760 status.go:257] multinode-721483-m02 status: &{Name:multinode-721483-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.72s)
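As with StopNode, `status` exits 7 once hosts are Stopped, so callers have to capture the code instead of treating any non-zero exit as an error. A generic sketch (the `check_status` wrapper is hypothetical, and `sh -c 'exit 7'` stands in for the minikube command):

```shell
# Run a command, report its exit code, and keep going regardless.
check_status() {
  "$@"
  rc=$?
  echo "exit=$rc"
}

check_status sh -c 'exit 7'   # -> exit=7  (all/some hosts stopped)
check_status true             # -> exit=0  (everything running)
```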

TestMultiNode/serial/RestartMultiNode (54.9s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-721483 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-721483 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (54.202125247s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-721483 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (54.90s)

TestMultiNode/serial/ValidateNameConflict (34.89s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-721483
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-721483-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-721483-m02 --driver=docker  --container-runtime=docker: exit status 14 (90.4355ms)
-- stdout --
	* [multinode-721483-m02] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18943
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18943-2279/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-2279/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-721483-m02' is duplicated with machine name 'multinode-721483-m02' in profile 'multinode-721483'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-721483-m03 --driver=docker  --container-runtime=docker
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-721483-m03 --driver=docker  --container-runtime=docker: (32.297822385s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-721483
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-721483: exit status 80 (330.672825ms)
-- stdout --
	* Adding node m03 to cluster multinode-721483 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-721483-m03 already exists in multinode-721483-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-721483-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-721483-m03: (2.090369549s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (34.89s)

TestPreload (140.18s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-379429 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
E0831 22:54:39.365706    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/functional-422183/client.crt: no such file or directory" logger="UnhandledError"
E0831 22:54:48.833427    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/addons-742639/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-379429 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (1m42.679684961s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-379429 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-379429 image pull gcr.io/k8s-minikube/busybox: (2.130038487s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-379429
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-379429: (10.901929451s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-379429 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-379429 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (21.900601845s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-379429 image list
helpers_test.go:176: Cleaning up "test-preload-379429" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-379429
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-379429: (2.315061484s)
--- PASS: TestPreload (140.18s)

TestScheduledStopUnix (105.05s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-049537 --memory=2048 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-049537 --memory=2048 --driver=docker  --container-runtime=docker: (31.766494723s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-049537 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-049537 -n scheduled-stop-049537
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-049537 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-049537 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-049537 -n scheduled-stop-049537
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-049537
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-049537 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0831 22:57:51.904745    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/addons-742639/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-049537
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-049537: exit status 7 (112.539877ms)

-- stdout --
	scheduled-stop-049537
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-049537 -n scheduled-stop-049537
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-049537 -n scheduled-stop-049537: exit status 7 (83.222831ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-049537" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-049537
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-049537: (1.664382892s)
--- PASS: TestScheduledStopUnix (105.05s)

TestSkaffold (119.4s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe425832735 version
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p skaffold-184599 --memory=2600 --driver=docker  --container-runtime=docker
skaffold_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p skaffold-184599 --memory=2600 --driver=docker  --container-runtime=docker: (31.801711377s)
skaffold_test.go:86: copying out/minikube-linux-arm64 to /home/jenkins/workspace/Docker_Linux_docker_arm64/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe425832735 run --minikube-profile skaffold-184599 --kube-context skaffold-184599 --status-check=true --port-forward=false --interactive=false
E0831 22:59:39.365289    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/functional-422183/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe425832735 run --minikube-profile skaffold-184599 --kube-context skaffold-184599 --status-check=true --port-forward=false --interactive=false: (1m10.68341782s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:345: "leeroy-app-5b67894f87-992hf" [93edc911-a2af-409d-a1ed-914f4afc0804] Running
E0831 22:59:48.833852    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/addons-742639/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.00400211s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:345: "leeroy-web-794dd8bf8-vgvr7" [a0a4842a-fbb2-42f8-99cd-1eae5d195acf] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003920447s
helpers_test.go:176: Cleaning up "skaffold-184599" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p skaffold-184599
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p skaffold-184599: (3.636953562s)
--- PASS: TestSkaffold (119.40s)

TestInsufficientStorage (12.06s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-975862 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-975862 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (9.689002945s)

-- stdout --
	{"specversion":"1.0","id":"fe7f140f-c0b3-4a99-b03d-52d2e60c2f0b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-975862] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"332b1873-7eab-4838-a1d3-0f96d86b431d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18943"}}
	{"specversion":"1.0","id":"d5012962-26aa-4039-b9fd-03cd11b414df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"3e637a22-019d-497d-9d45-131cb033938d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18943-2279/kubeconfig"}}
	{"specversion":"1.0","id":"48bf7aaf-fbad-4ef1-9ae8-b9981c162e17","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-2279/.minikube"}}
	{"specversion":"1.0","id":"a717fa65-0885-429e-967b-9e74a1733a35","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"72c6e10e-a30c-4c19-adab-8e1b8a6a4c68","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"78376aaa-67f7-4354-9b3b-1bd1857afbb1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"39faae18-a6a2-4b74-a220-72ea811a472a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"82b0a845-5385-4120-a1db-283fe72efc59","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"f0579a48-c5cb-4df2-ae93-f4abef7186ff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"ff4fa28a-42b4-41e3-94f4-8656436f1ffc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-975862\" primary control-plane node in \"insufficient-storage-975862\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"2f786417-24fd-4fc2-bba7-9ab8f8909404","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.44-1724862063-19530 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"63813334-1e7f-400e-93c2-b1c3a081770d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"e82fe3cf-21c4-4943-a06d-6b4893bc182f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
helpers_test.go:700: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-975862 --output=json --layout=cluster
helpers_test.go:700: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-975862 --output=json --layout=cluster: exit status 7 (279.003318ms)

-- stdout --
	{"Name":"insufficient-storage-975862","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-975862","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0831 23:00:11.624364  233922 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-975862" does not appear in /home/jenkins/minikube-integration/18943-2279/kubeconfig

** /stderr **
helpers_test.go:700: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-975862 --output=json --layout=cluster
helpers_test.go:700: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-975862 --output=json --layout=cluster: exit status 7 (285.099949ms)

-- stdout --
	{"Name":"insufficient-storage-975862","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-975862","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0831 23:00:11.910625  233982 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-975862" does not appear in /home/jenkins/minikube-integration/18943-2279/kubeconfig
	E0831 23:00:11.920903  233982 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/insufficient-storage-975862/events.json: no such file or directory

** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-975862" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-975862
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-975862: (1.805232223s)
--- PASS: TestInsufficientStorage (12.06s)

TestRunningBinaryUpgrade (153.17s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.614115307 start -p running-upgrade-343596 --memory=2200 --vm-driver=docker  --container-runtime=docker
E0831 23:04:39.365414    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/functional-422183/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.614115307 start -p running-upgrade-343596 --memory=2200 --vm-driver=docker  --container-runtime=docker: (1m42.517767568s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-343596 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0831 23:04:47.010087    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/skaffold-184599/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:04:47.016644    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/skaffold-184599/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:04:47.043527    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/skaffold-184599/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:04:47.064961    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/skaffold-184599/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:04:47.106484    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/skaffold-184599/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:04:47.187929    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/skaffold-184599/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:04:47.349393    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/skaffold-184599/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:04:47.670813    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/skaffold-184599/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:04:48.313007    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/skaffold-184599/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:04:48.834216    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/addons-742639/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:04:49.594821    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/skaffold-184599/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:04:52.156175    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/skaffold-184599/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:04:57.277643    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/skaffold-184599/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:05:07.519092    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/skaffold-184599/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-343596 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (47.645794748s)
helpers_test.go:176: Cleaning up "running-upgrade-343596" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-343596
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-343596: (2.29944651s)
--- PASS: TestRunningBinaryUpgrade (153.17s)

TestKubernetesUpgrade (372.64s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-230261 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0831 23:06:08.963899    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/skaffold-184599/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-230261 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (54.096498117s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-230261
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-230261: (1.223368688s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-230261 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-230261 status --format={{.Host}}: exit status 7 (70.806312ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-230261 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0831 23:07:30.886230    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/skaffold-184599/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-230261 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m47.293000611s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-230261 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-230261 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-230261 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker: exit status 106 (152.758962ms)

-- stdout --
	* [kubernetes-upgrade-230261] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18943
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18943-2279/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-2279/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-230261
	    minikube start -p kubernetes-upgrade-230261 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2302612 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0, by running:
	    
	    minikube start -p kubernetes-upgrade-230261 --kubernetes-version=v1.31.0
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-230261 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-230261 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (26.968215304s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-230261" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-230261
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-230261: (2.665215175s)
--- PASS: TestKubernetesUpgrade (372.64s)

TestMissingContainerUpgrade (133.55s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.3301394438 start -p missing-upgrade-664299 --memory=2200 --driver=docker  --container-runtime=docker
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.3301394438 start -p missing-upgrade-664299 --memory=2200 --driver=docker  --container-runtime=docker: (54.014656099s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-664299
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-664299: (10.471386711s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-664299
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-664299 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-664299 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (1m5.581013163s)
helpers_test.go:176: Cleaning up "missing-upgrade-664299" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-664299
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-664299: (2.544906918s)
--- PASS: TestMissingContainerUpgrade (133.55s)

TestStoppedBinaryUpgrade/Setup (0.64s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.64s)

TestStoppedBinaryUpgrade/Upgrade (77.25s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3510449832 start -p stopped-upgrade-627232 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3510449832 start -p stopped-upgrade-627232 --memory=2200 --vm-driver=docker  --container-runtime=docker: (35.790282938s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3510449832 -p stopped-upgrade-627232 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3510449832 -p stopped-upgrade-627232 stop: (10.997217239s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-627232 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-627232 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (30.462402393s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (77.25s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.8s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-627232
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-627232: (1.802697182s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.80s)

TestPause/serial/Start (76.86s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-422275 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
E0831 23:09:39.364717    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/functional-422183/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:09:47.009977    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/skaffold-184599/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:09:48.833711    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/addons-742639/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:10:14.728072    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/skaffold-184599/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-422275 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (1m16.864095321s)
--- PASS: TestPause/serial/Start (76.86s)

TestPause/serial/SecondStartNoReconfiguration (29.85s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-422275 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-422275 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (29.828660475s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (29.85s)

TestPause/serial/Pause (0.72s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-422275 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.72s)

TestPause/serial/VerifyStatus (0.41s)

=== RUN   TestPause/serial/VerifyStatus
helpers_test.go:700: (dbg) Run:  out/minikube-linux-arm64 status -p pause-422275 --output=json --layout=cluster
helpers_test.go:700: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-422275 --output=json --layout=cluster: exit status 2 (404.855662ms)
-- stdout --
	{"Name":"pause-422275","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-422275","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.41s)
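Editor's note: the `--layout=cluster` JSON above is machine-readable. A minimal parsing sketch (Python; the document is copied from this run, abridged to the relevant fields — the HTTP-style status codes 200/405/418 map to OK/Stopped/Paused exactly as shown in the output):

```python
import json

# Cluster status as printed by `minikube status --output=json --layout=cluster`,
# copied (abridged) from the test output above.
status = json.loads(
    '{"Name":"pause-422275","StatusCode":418,"StatusName":"Paused",'
    '"Nodes":[{"Name":"pause-422275","StatusCode":200,"StatusName":"OK",'
    '"Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},'
    '"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}'
)

print(status["StatusName"])  # Paused
for node in status["Nodes"]:
    for name, comp in node["Components"].items():
        # e.g. "apiserver Paused", "kubelet Stopped"
        print(name, comp["StatusName"])
```

Note that the command itself exits 2 when the cluster is paused (as seen above), so callers should parse stdout even on a non-zero exit rather than treating it as failure.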

TestPause/serial/Unpause (0.54s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-422275 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.54s)

TestPause/serial/PauseAgain (0.75s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-422275 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.75s)

TestPause/serial/DeletePaused (2.61s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-422275 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-422275 --alsologtostderr -v=5: (2.607148648s)
--- PASS: TestPause/serial/DeletePaused (2.61s)

TestPause/serial/VerifyDeletedResources (0.35s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-422275
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-422275: exit status 1 (18.879163ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-422275: no such volume
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.35s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-055032 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-055032 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (71.207601ms)
-- stdout --
	* [NoKubernetes-055032] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18943
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18943-2279/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-2279/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)
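Editor's note: exit status 14 here is minikube's MK_USAGE error for a conflicting flag pair. An analogous validation in Python (illustrative only — `validate` is a hypothetical helper, not minikube's actual implementation):

```python
import argparse

parser = argparse.ArgumentParser(prog="start")
parser.add_argument("--no-kubernetes", action="store_true")
parser.add_argument("--kubernetes-version")

def validate(argv):
    """Reject the flag combination the test above exercises."""
    args = parser.parse_args(argv)
    if args.no_kubernetes and args.kubernetes_version:
        # mirrors: "X Exiting due to MK_USAGE: cannot specify
        # --kubernetes-version with --no-kubernetes"
        raise SystemExit("cannot specify --kubernetes-version with --no-kubernetes")
    return args

validate(["--no-kubernetes"])  # accepted: --no-kubernetes alone is valid
```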

TestNoKubernetes/serial/StartWithK8s (40.08s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-055032 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-055032 --driver=docker  --container-runtime=docker: (39.568579442s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-055032 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (40.08s)

TestNoKubernetes/serial/StartWithStopK8s (19.45s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-055032 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-055032 --no-kubernetes --driver=docker  --container-runtime=docker: (17.012704987s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-055032 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-055032 status -o json: exit status 2 (319.430436ms)
-- stdout --
	{"Name":"NoKubernetes-055032","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-055032
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-055032: (2.11952281s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (19.45s)
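Editor's note: the flat `status -o json` document above is easy to assert on directly; a small Python sketch using the JSON verbatim from this run (with `--no-kubernetes` the node container keeps running while kubelet and apiserver stay stopped, which is why the status command exits 2):

```python
import json

# Profile status as printed by `minikube status -o json`,
# copied verbatim from the test output above.
raw = ('{"Name":"NoKubernetes-055032","Host":"Running","Kubelet":"Stopped",'
       '"APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}')
st = json.loads(raw)

print(st["Host"], st["Kubelet"], st["APIServer"])  # Running Stopped Stopped
```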

TestNoKubernetes/serial/Start (11.99s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-055032 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-055032 --no-kubernetes --driver=docker  --container-runtime=docker: (11.994449858s)
--- PASS: TestNoKubernetes/serial/Start (11.99s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-055032 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-055032 "sudo systemctl is-active --quiet service kubelet": exit status 1 (263.349461ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

TestNoKubernetes/serial/ProfileList (0.84s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.84s)

TestNoKubernetes/serial/Stop (1.29s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-055032
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-055032: (1.291636433s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

TestNetworkPlugins/group/auto/Start (81.7s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-254174 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-254174 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (1m21.701283767s)
--- PASS: TestNetworkPlugins/group/auto/Start (81.70s)

TestNoKubernetes/serial/StartNoArgs (8.98s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-055032 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-055032 --driver=docker  --container-runtime=docker: (8.984813623s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.98s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.36s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-055032 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-055032 "sudo systemctl is-active --quiet service kubelet": exit status 1 (359.959364ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.36s)

TestNetworkPlugins/group/kindnet/Start (70.67s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-254174 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-254174 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (1m10.66692605s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (70.67s)

TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-254174 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

TestNetworkPlugins/group/auto/NetCatPod (9.31s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-254174 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:345: "netcat-6fc964789b-2tkfs" [c1e5acad-5836-4c22-a947-3c4b514f148b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:345: "netcat-6fc964789b-2tkfs" [c1e5acad-5836-4c22-a947-3c4b514f148b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.003940135s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.31s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:345: "kindnet-xsmcj" [c9f73be7-320f-4c6c-b8a3-a1eceb47c3df] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.006496234s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-254174 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.30s)

TestNetworkPlugins/group/kindnet/NetCatPod (12.29s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-254174 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:345: "netcat-6fc964789b-cp4jp" [24a6c90a-9778-45b1-b697-7b3cbf506d6e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:345: "netcat-6fc964789b-cp4jp" [24a6c90a-9778-45b1-b697-7b3cbf506d6e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.005069034s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.29s)

TestNetworkPlugins/group/auto/DNS (0.28s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-254174 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.28s)

TestNetworkPlugins/group/auto/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-254174 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.23s)

TestNetworkPlugins/group/auto/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-254174 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.24s)

TestNetworkPlugins/group/kindnet/DNS (0.29s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-254174 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.29s)

TestNetworkPlugins/group/kindnet/Localhost (0.28s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-254174 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.28s)

TestNetworkPlugins/group/kindnet/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-254174 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.23s)
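Editor's note: the Localhost and HairPin checks above both boil down to `nc -w 5 -z <host> 8080` from inside the netcat pod. A rough Python equivalent of that TCP probe (a sketch only — host and port are whatever the caller passes, not the in-cluster service):

```python
import socket

def tcp_reachable(host: str, port: int, timeout: float = 5.0) -> bool:
    """Rough equivalent of `nc -w <timeout> -z host port`: connect, then close."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

The hairpin case is the interesting one: the pod connects back to itself through its own service name, which typically only succeeds when the networking stack in use supports hairpin NAT.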

TestNetworkPlugins/group/calico/Start (79.8s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-254174 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-254174 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m19.80446584s)
--- PASS: TestNetworkPlugins/group/calico/Start (79.80s)

TestNetworkPlugins/group/custom-flannel/Start (67.55s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-254174 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
E0831 23:14:31.906208    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/addons-742639/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:14:39.364981    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/functional-422183/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:14:47.009463    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/skaffold-184599/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:14:48.833609    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/addons-742639/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-254174 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (1m7.55168957s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (67.55s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:345: "calico-node-54f8m" [8c256537-5751-4a5d-88bf-e645b846af37] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006059739s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.45s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-254174 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.45s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (12.28s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-254174 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:345: "netcat-6fc964789b-hbgk2" [61a8301d-d107-47eb-9e9a-18a4517a1391] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:345: "netcat-6fc964789b-hbgk2" [61a8301d-d107-47eb-9e9a-18a4517a1391] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.008331291s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.28s)

TestNetworkPlugins/group/calico/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-254174 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.35s)

TestNetworkPlugins/group/calico/NetCatPod (10.27s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-254174 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:345: "netcat-6fc964789b-w4mnq" [a43fa775-d108-4e60-b5b5-17c12f8621bc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:345: "netcat-6fc964789b-w4mnq" [a43fa775-d108-4e60-b5b5-17c12f8621bc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.004726826s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.27s)

TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-254174 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-254174 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-254174 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

TestNetworkPlugins/group/calico/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-254174 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.20s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-254174 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-254174 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.21s)

                                                
                                    
TestNetworkPlugins/group/false/Start (85.57s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p false-254174 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p false-254174 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (1m25.572368206s)
--- PASS: TestNetworkPlugins/group/false/Start (85.57s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (53.68s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-254174 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-254174 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (53.679580461s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (53.68s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-254174 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.30s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-254174 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:345: "netcat-6fc964789b-9n59l" [b73168f1-341a-4b8d-821d-f8508982ca74] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:345: "netcat-6fc964789b-9n59l" [b73168f1-341a-4b8d-821d-f8508982ca74] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.005282857s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.30s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-254174 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-254174 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-254174 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p false-254174 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.36s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (11.33s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-254174 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:345: "netcat-6fc964789b-zb48s" [8215e0f9-7abc-40d3-b9d3-e8f6ba82e5ea] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:345: "netcat-6fc964789b-zb48s" [8215e0f9-7abc-40d3-b9d3-e8f6ba82e5ea] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 11.004291157s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (11.33s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (62.12s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-254174 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-254174 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (1m2.120772571s)
--- PASS: TestNetworkPlugins/group/flannel/Start (62.12s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-254174 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-254174 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.22s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-254174 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.22s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (55.10s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-254174 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
E0831 23:18:36.921072    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/auto-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:18:36.927654    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/auto-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:18:36.939019    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/auto-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:18:36.960321    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/auto-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:18:37.001644    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/auto-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:18:37.083180    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/auto-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:18:37.193555    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/kindnet-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:18:37.199924    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/kindnet-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:18:37.211263    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/kindnet-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:18:37.232563    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/kindnet-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:18:37.244870    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/auto-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:18:37.274229    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/kindnet-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:18:37.356103    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/kindnet-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:18:37.518107    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/kindnet-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:18:37.566340    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/auto-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:18:37.839710    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/kindnet-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:18:38.208313    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/auto-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:18:38.481781    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/kindnet-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:18:39.490528    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/auto-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:18:39.763046    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/kindnet-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:18:42.052815    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/auto-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:18:42.324768    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/kindnet-254174/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-254174 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (55.095334951s)
--- PASS: TestNetworkPlugins/group/bridge/Start (55.10s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:345: "kube-flannel-ds-fxgjj" [5b8f5fc2-fe62-442c-8689-5525a7b08f9d] Running
E0831 23:18:47.174535    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/auto-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:18:47.446166    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/kindnet-254174/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004345125s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-254174 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (12.34s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-254174 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:345: "netcat-6fc964789b-zd4n4" [fcb979d0-ab61-48ae-ab42-519395887f6f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:345: "netcat-6fc964789b-zd4n4" [fcb979d0-ab61-48ae-ab42-519395887f6f] Running
E0831 23:18:57.415995    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/auto-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:18:57.688072    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/kindnet-254174/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.00359209s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.34s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-254174 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-254174 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-254174 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.42s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-254174 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.42s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (11.41s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-254174 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:345: "netcat-6fc964789b-lfj5l" [400d047c-0982-4b81-a048-4a02aaf1655b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:345: "netcat-6fc964789b-lfj5l" [400d047c-0982-4b81-a048-4a02aaf1655b] Running
E0831 23:19:17.897928    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/auto-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:19:18.169567    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/kindnet-254174/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.004390326s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.41s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.29s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-254174 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.29s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.34s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-254174 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.34s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.27s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-254174 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.27s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (58.37s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kubenet-254174 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
E0831 23:19:39.365815    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/functional-422183/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kubenet-254174 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (58.367773958s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (58.37s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (181.79s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-326446 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
E0831 23:19:47.009511    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/skaffold-184599/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:19:48.834220    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/addons-742639/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:19:58.859972    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/auto-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:19:59.130953    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/kindnet-254174/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-326446 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (3m1.787696799s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (181.79s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.30s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kubenet-254174 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (11.28s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-254174 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:345: "netcat-6fc964789b-4hl5w" [4122711d-f31e-4d19-9de7-151d325cc5eb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0831 23:20:29.167811    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/calico-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:20:29.174110    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/calico-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:20:29.185377    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/calico-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:20:29.206728    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/calico-254174/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:345: "netcat-6fc964789b-4hl5w" [4122711d-f31e-4d19-9de7-151d325cc5eb] Running
E0831 23:20:29.248747    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/calico-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:20:29.330307    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/calico-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:20:29.492134    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/calico-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:20:29.813918    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/calico-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:20:30.456162    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/calico-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:20:30.698982    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/custom-flannel-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:20:30.705574    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/custom-flannel-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:20:30.717167    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/custom-flannel-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:20:30.738796    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/custom-flannel-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:20:30.780305    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/custom-flannel-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:20:30.861880    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/custom-flannel-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:20:31.023494    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/custom-flannel-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:20:31.345731    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/custom-flannel-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:20:31.738623    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/calico-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:20:31.987280    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/custom-flannel-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:20:33.269697    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/custom-flannel-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:20:34.301013    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/calico-254174/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 11.005449889s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (11.28s)

TestNetworkPlugins/group/kubenet/DNS (0.36s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-254174 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.36s)

TestNetworkPlugins/group/kubenet/Localhost (0.28s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-254174 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E0831 23:20:35.831086    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/custom-flannel-254174/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.28s)

TestNetworkPlugins/group/kubenet/HairPin (0.26s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-254174 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.26s)

TestStartStop/group/no-preload/serial/FirstStart (51.1s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-327606 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
E0831 23:21:10.089902    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/skaffold-184599/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:21:10.145296    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/calico-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:21:11.680552    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/custom-flannel-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:21:20.781226    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/auto-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:21:21.052956    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/kindnet-254174/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-327606 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (51.100347043s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (51.10s)

TestStartStop/group/no-preload/serial/DeployApp (8.35s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-327606 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:345: "busybox" [c4aa5a05-ac48-4472-a39a-864dadcd6aa8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0831 23:21:51.107330    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/calico-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:21:52.642681    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/custom-flannel-254174/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:345: "busybox" [c4aa5a05-ac48-4472-a39a-864dadcd6aa8] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.003645478s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-327606 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.35s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.08s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-327606 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-327606 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.08s)

TestStartStop/group/no-preload/serial/Stop (11.1s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-327606 --alsologtostderr -v=3
E0831 23:22:09.150775    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/enable-default-cni-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:22:09.157310    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/enable-default-cni-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:22:09.169024    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/enable-default-cni-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:22:09.190346    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/enable-default-cni-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:22:09.231704    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/enable-default-cni-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:22:09.313163    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/enable-default-cni-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:22:09.474849    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/enable-default-cni-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:22:09.796311    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/enable-default-cni-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:22:10.438629    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/enable-default-cni-254174/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-327606 --alsologtostderr -v=3: (11.10184993s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.10s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-327606 -n no-preload-327606
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-327606 -n no-preload-327606: exit status 7 (64.832818ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-327606 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/no-preload/serial/SecondStart (278.49s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-327606 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
E0831 23:22:11.720362    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/enable-default-cni-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:22:14.281991    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/enable-default-cni-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:22:19.403468    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/enable-default-cni-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:22:29.645751    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/enable-default-cni-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:22:36.724421    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/false-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:22:36.730907    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/false-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:22:36.742340    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/false-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:22:36.763789    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/false-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:22:36.805175    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/false-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:22:36.886550    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/false-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:22:37.048129    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/false-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:22:37.370034    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/false-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:22:38.012531    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/false-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:22:39.294485    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/false-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:22:41.855773    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/false-254174/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-327606 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (4m38.103625835s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-327606 -n no-preload-327606
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (278.49s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.61s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-326446 create -f testdata/busybox.yaml
E0831 23:22:46.977187    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/false-254174/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:345: "busybox" [ae176ff4-0ef3-462f-aee4-5cc0acd583bb] Pending
helpers_test.go:345: "busybox" [ae176ff4-0ef3-462f-aee4-5cc0acd583bb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0831 23:22:50.127850    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/enable-default-cni-254174/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:345: "busybox" [ae176ff4-0ef3-462f-aee4-5cc0acd583bb] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.049273077s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-326446 exec busybox -- /bin/sh -c "ulimit -n"
E0831 23:22:57.219328    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/false-254174/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.61s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-326446 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-326446 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.06s)

TestStartStop/group/old-k8s-version/serial/Stop (11.14s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-326446 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-326446 --alsologtostderr -v=3: (11.139301504s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.14s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-326446 -n old-k8s-version-326446
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-326446 -n old-k8s-version-326446: exit status 7 (76.37649ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-326446 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/old-k8s-version/serial/SecondStart (134.88s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-326446 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
E0831 23:23:13.030132    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/calico-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:23:14.564099    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/custom-flannel-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:23:17.700965    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/false-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:23:31.089942    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/enable-default-cni-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:23:36.919873    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/auto-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:23:37.193699    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/kindnet-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:23:42.789930    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/flannel-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:23:42.796299    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/flannel-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:23:42.807695    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/flannel-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:23:42.829173    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/flannel-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:23:42.870577    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/flannel-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:23:42.952209    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/flannel-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:23:43.113652    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/flannel-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:23:43.435878    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/flannel-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:23:44.078003    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/flannel-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:23:45.359673    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/flannel-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:23:47.921373    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/flannel-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:23:53.042722    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/flannel-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:23:58.662688    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/false-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:24:03.284566    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/flannel-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:24:04.622578    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/auto-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:24:04.895059    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/kindnet-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:24:08.458409    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/bridge-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:24:08.465141    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/bridge-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:24:08.476515    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/bridge-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:24:08.497994    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/bridge-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:24:08.539659    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/bridge-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:24:08.621170    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/bridge-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:24:08.782657    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/bridge-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:24:09.104317    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/bridge-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:24:09.745680    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/bridge-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:24:11.027575    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/bridge-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:24:13.589391    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/bridge-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:24:18.711354    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/bridge-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:24:23.765943    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/flannel-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:24:28.953041    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/bridge-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:24:39.365126    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/functional-422183/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:24:47.009843    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/skaffold-184599/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:24:48.834003    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/addons-742639/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:24:49.434999    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/bridge-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:24:53.011692    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/enable-default-cni-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:25:04.728236    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/flannel-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:25:20.585043    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/false-254174/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-326446 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m14.507766105s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-326446 -n old-k8s-version-326446
E0831 23:25:24.204049    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/kubenet-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:25:24.210420    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/kubenet-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:25:24.221844    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/kubenet-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:25:24.244316    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/kubenet-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:25:24.285680    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/kubenet-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:25:24.366988    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/kubenet-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:25:24.528435    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/kubenet-254174/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (134.88s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:345: "kubernetes-dashboard-cd95d586-qrm8z" [5225c202-3887-4cb9-9675-b872eedbdc1e] Running
E0831 23:25:24.850148    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/kubenet-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:25:25.492149    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/kubenet-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:25:26.774451    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/kubenet-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:25:29.167801    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/calico-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:25:29.336266    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/kubenet-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:25:30.396520    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/bridge-254174/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003439421s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:345: "kubernetes-dashboard-cd95d586-qrm8z" [5225c202-3887-4cb9-9675-b872eedbdc1e] Running
E0831 23:25:30.699204    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/custom-flannel-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:25:34.457653    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/kubenet-254174/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003338437s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-326446 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.10s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-326446 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/old-k8s-version/serial/Pause (2.79s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-326446 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-326446 -n old-k8s-version-326446
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-326446 -n old-k8s-version-326446: exit status 2 (347.773624ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-326446 -n old-k8s-version-326446
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-326446 -n old-k8s-version-326446: exit status 2 (350.26404ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-326446 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-326446 -n old-k8s-version-326446
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-326446 -n old-k8s-version-326446
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.79s)

TestStartStop/group/embed-certs/serial/FirstStart (74.98s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-998236 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
E0831 23:25:44.699044    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/kubenet-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:25:56.872477    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/calico-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:25:58.406061    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/custom-flannel-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:26:05.180996    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/kubenet-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:26:26.650018    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/flannel-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:26:46.142879    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/kubenet-254174/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-998236 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (1m14.978260595s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (74.98s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:345: "kubernetes-dashboard-695b96c756-rtjcg" [049f4e9b-9091-47ba-8695-b14c7eeb9839] Running
E0831 23:26:52.318791    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/bridge-254174/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004903686s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.16s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:345: "kubernetes-dashboard-695b96c756-rtjcg" [049f4e9b-9091-47ba-8695-b14c7eeb9839] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00444134s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-327606 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.16s)

TestStartStop/group/embed-certs/serial/DeployApp (10.41s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-998236 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:345: "busybox" [ba09290f-8837-44f6-8ab1-0025d20d2531] Pending
helpers_test.go:345: "busybox" [ba09290f-8837-44f6-8ab1-0025d20d2531] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:345: "busybox" [ba09290f-8837-44f6-8ab1-0025d20d2531] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.00367942s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-998236 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.41s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-327606 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/no-preload/serial/Pause (2.92s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-327606 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-327606 -n no-preload-327606
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-327606 -n no-preload-327606: exit status 2 (323.005701ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-327606 -n no-preload-327606
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-327606 -n no-preload-327606: exit status 2 (336.890252ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-327606 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-327606 -n no-preload-327606
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-327606 -n no-preload-327606
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.92s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (75.67s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-339137 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-339137 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (1m15.666860471s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (75.67s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.35s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-998236 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-998236 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.198666103s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-998236 describe deploy/metrics-server -n kube-system
E0831 23:27:09.149998    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/enable-default-cni-254174/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.35s)

TestStartStop/group/embed-certs/serial/Stop (11.06s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-998236 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-998236 --alsologtostderr -v=3: (11.060282578s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.06s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-998236 -n embed-certs-998236
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-998236 -n embed-certs-998236: exit status 7 (80.851283ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-998236 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/embed-certs/serial/SecondStart (272.59s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-998236 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
E0831 23:27:36.724402    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/false-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:27:36.853709    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/enable-default-cni-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:27:46.984466    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/old-k8s-version-326446/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:27:46.990814    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/old-k8s-version-326446/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:27:47.002333    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/old-k8s-version-326446/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:27:47.024050    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/old-k8s-version-326446/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:27:47.065717    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/old-k8s-version-326446/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:27:47.148129    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/old-k8s-version-326446/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:27:47.310322    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/old-k8s-version-326446/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:27:47.631618    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/old-k8s-version-326446/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:27:48.273339    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/old-k8s-version-326446/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:27:49.554825    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/old-k8s-version-326446/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:27:52.116124    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/old-k8s-version-326446/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:27:57.238451    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/old-k8s-version-326446/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:28:04.426641    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/false-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:28:07.479917    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/old-k8s-version-326446/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:28:08.065115    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/kubenet-254174/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-998236 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (4m32.219437141s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-998236 -n embed-certs-998236
E0831 23:31:53.048040    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/no-preload-327606/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (272.59s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.38s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-339137 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:345: "busybox" [05817ed8-f596-486e-8d68-d33b050b8627] Pending
helpers_test.go:345: "busybox" [05817ed8-f596-486e-8d68-d33b050b8627] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:345: "busybox" [05817ed8-f596-486e-8d68-d33b050b8627] Running
E0831 23:28:27.961657    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/old-k8s-version-326446/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004466047s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-339137 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.38s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-339137 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-339137 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.005072067s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-339137 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.14s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (10.94s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-339137 --alsologtostderr -v=3
E0831 23:28:36.919844    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/auto-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:28:37.193668    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/kindnet-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:28:42.790342    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/flannel-254174/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-339137 --alsologtostderr -v=3: (10.935029841s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (10.94s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-339137 -n default-k8s-diff-port-339137
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-339137 -n default-k8s-diff-port-339137: exit status 7 (73.265499ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-339137 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (281.66s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-339137 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
E0831 23:29:08.458277    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/bridge-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:29:08.923317    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/old-k8s-version-326446/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:29:10.491662    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/flannel-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:29:36.160915    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/bridge-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:29:39.365673    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/functional-422183/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:29:47.015639    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/skaffold-184599/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:29:48.833380    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/addons-742639/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:30:24.204482    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/kubenet-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:30:29.167392    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/calico-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:30:30.699295    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/custom-flannel-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:30:30.844683    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/old-k8s-version-326446/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:30:51.906391    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/kubenet-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:31:11.907838    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/addons-742639/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:31:50.478578    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/no-preload-327606/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:31:50.485010    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/no-preload-327606/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:31:50.496366    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/no-preload-327606/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:31:50.517748    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/no-preload-327606/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:31:50.559105    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/no-preload-327606/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:31:50.640513    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/no-preload-327606/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:31:50.801861    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/no-preload-327606/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:31:51.123568    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/no-preload-327606/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:31:51.765762    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/no-preload-327606/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-339137 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (4m41.327924964s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-339137 -n default-k8s-diff-port-339137
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (281.66s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:345: "kubernetes-dashboard-695b96c756-zbz8x" [e40767a2-fefd-4b68-9ab8-7eaf61d9cafd] Running
E0831 23:31:55.609914    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/no-preload-327606/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004499466s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:345: "kubernetes-dashboard-695b96c756-zbz8x" [e40767a2-fefd-4b68-9ab8-7eaf61d9cafd] Running
E0831 23:32:00.731631    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/no-preload-327606/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003514699s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-998236 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-998236 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/embed-certs/serial/Pause (3s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-998236 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-998236 -n embed-certs-998236
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-998236 -n embed-certs-998236: exit status 2 (303.744936ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-998236 -n embed-certs-998236
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-998236 -n embed-certs-998236: exit status 2 (344.521965ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-998236 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-998236 -n embed-certs-998236
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-998236 -n embed-certs-998236
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.00s)

TestStartStop/group/newest-cni/serial/FirstStart (37.16s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-276788 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
E0831 23:32:10.973350    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/no-preload-327606/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:32:31.455323    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/no-preload-327606/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:32:36.724363    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/false-254174/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:32:46.985219    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/old-k8s-version-326446/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-276788 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (37.162967977s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (37.16s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.13s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-276788 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-276788 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.12499547s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.13s)

TestStartStop/group/newest-cni/serial/Stop (9.11s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-276788 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-276788 --alsologtostderr -v=3: (9.109310587s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (9.11s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-276788 -n newest-cni-276788
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-276788 -n newest-cni-276788: exit status 7 (75.425019ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-276788 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/newest-cni/serial/SecondStart (19.94s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-276788 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
E0831 23:33:12.416651    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/no-preload-327606/client.crt: no such file or directory" logger="UnhandledError"
E0831 23:33:14.686873    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/old-k8s-version-326446/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-276788 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (19.358725354s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-276788 -n newest-cni-276788
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (19.94s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.4s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-276788 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.40s)

TestStartStop/group/newest-cni/serial/Pause (4s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-276788 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-276788 -n newest-cni-276788
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-276788 -n newest-cni-276788: exit status 2 (406.887749ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-276788 -n newest-cni-276788
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-276788 -n newest-cni-276788: exit status 2 (403.879065ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-276788 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-276788 -n newest-cni-276788
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-276788 -n newest-cni-276788
--- PASS: TestStartStop/group/newest-cni/serial/Pause (4.00s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:345: "kubernetes-dashboard-695b96c756-5f7bc" [89f87cb2-b660-482f-b383-fb88bd71f7f6] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003768307s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:345: "kubernetes-dashboard-695b96c756-5f7bc" [89f87cb2-b660-482f-b383-fb88bd71f7f6] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.002936499s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-339137 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-339137 image list --format=json
E0831 23:33:36.919564    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/auto-254174/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.8s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-339137 --alsologtostderr -v=1
E0831 23:33:37.194064    7597 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18943-2279/.minikube/profiles/kindnet-254174/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-339137 -n default-k8s-diff-port-339137
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-339137 -n default-k8s-diff-port-339137: exit status 2 (321.20565ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-339137 -n default-k8s-diff-port-339137
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-339137 -n default-k8s-diff-port-339137: exit status 2 (339.471488ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-339137 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-339137 -n default-k8s-diff-port-339137
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-339137 -n default-k8s-diff-port-339137
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.80s)

Test skip (24/353)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

TestDownloadOnly/v1.31.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0/kubectl (0.00s)

TestDownloadOnlyKic (0.56s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-262633 --alsologtostderr --driver=docker  --container-runtime=docker
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:176: Cleaning up "download-docker-262633" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-262633
--- SKIP: TestDownloadOnlyKic (0.56s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/cilium (5.89s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-254174 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-254174

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-254174

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-254174

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-254174

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-254174

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-254174

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-254174

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-254174

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-254174

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-254174

>>> host: /etc/nsswitch.conf:
* Profile "cilium-254174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254174"

>>> host: /etc/hosts:
* Profile "cilium-254174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254174"

>>> host: /etc/resolv.conf:
* Profile "cilium-254174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254174"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-254174

>>> host: crictl pods:
* Profile "cilium-254174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254174"

>>> host: crictl containers:
* Profile "cilium-254174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254174"

>>> k8s: describe netcat deployment:
error: context "cilium-254174" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-254174" does not exist

>>> k8s: netcat logs:
error: context "cilium-254174" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-254174" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-254174" does not exist

>>> k8s: coredns logs:
error: context "cilium-254174" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-254174" does not exist

>>> k8s: api server logs:
error: context "cilium-254174" does not exist

>>> host: /etc/cni:
* Profile "cilium-254174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254174"

>>> host: ip a s:
* Profile "cilium-254174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254174"

>>> host: ip r s:
* Profile "cilium-254174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254174"

>>> host: iptables-save:
* Profile "cilium-254174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254174"

>>> host: iptables table nat:
* Profile "cilium-254174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254174"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-254174

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-254174

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-254174" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-254174" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-254174

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-254174

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-254174" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-254174" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-254174" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-254174" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-254174" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-254174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254174"

>>> host: kubelet daemon config:
* Profile "cilium-254174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254174"

>>> k8s: kubelet logs:
* Profile "cilium-254174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254174"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-254174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254174"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-254174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254174"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-254174

>>> host: docker daemon status:
* Profile "cilium-254174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254174"

>>> host: docker daemon config:
* Profile "cilium-254174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254174"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-254174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254174"

>>> host: docker system info:
* Profile "cilium-254174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254174"

>>> host: cri-docker daemon status:
* Profile "cilium-254174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254174"

>>> host: cri-docker daemon config:
* Profile "cilium-254174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254174"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-254174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254174"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-254174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254174"

>>> host: cri-dockerd version:
* Profile "cilium-254174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254174"

>>> host: containerd daemon status:
* Profile "cilium-254174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254174"

>>> host: containerd daemon config:
* Profile "cilium-254174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254174"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-254174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254174"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-254174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254174"

>>> host: containerd config dump:
* Profile "cilium-254174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254174"

>>> host: crio daemon status:
* Profile "cilium-254174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254174"

>>> host: crio daemon config:
* Profile "cilium-254174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254174"

>>> host: /etc/crio:
* Profile "cilium-254174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254174"

>>> host: crio config:
* Profile "cilium-254174" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-254174"
----------------------- debugLogs end: cilium-254174 [took: 5.723404374s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-254174" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-254174
--- SKIP: TestNetworkPlugins/group/cilium (5.89s)

TestStartStop/group/disable-driver-mounts (0.16s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-384321" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-384321
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)